Mirror of https://github.com/LCTT/TranslateProject.git
Synced 2025-03-24 02:20:09 +08:00

Merge remote-tracking branch 'LCTT/master'

This commit is contained in:
commit e8ad1e490e
[#]: collector: (lujun9972)
[#]: translator: (geekpi)
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-12207-1.html)
[#]: subject: (Using mergerfs to increase your virtual storage)
[#]: via: (https://fedoramagazine.org/using-mergerfs-to-increase-your-virtual-storage/)
[#]: author: (Curt Warfield https://fedoramagazine.org/author/rcurtiswarfield/)
![][1]

如果你想在一个媒体项目中用到多个磁盘或分区,不想丢失任何现有数据,但又想将所有文件都存放在一个驱动器下,该怎么办?这时,mergerfs 就能派上用场!

[mergerfs][2] 是一个联合文件系统,旨在简化存储和管理众多商业存储设备上的文件。

你需要从他们的 [GitHub][3] 页面获取最新的 RPM。Fedora 的版本名称中带有 “fc” 和版本号。例如,这是 Fedora 31 的版本:[mergerfs-2.29.0-1.fc31.x86_64.rpm][4]。

### 安装和配置 mergerfs

使用 `sudo` 安装已下载的 mergerfs 软件包:

```
$ sudo dnf install mergerfs-2.29.0-1.fc31.x86_64.rpm
```
```
total 2
-rw-rw-r--. 1 curt curt 0 Mar 8 17:21 Halloween hijinks.mkv
```

在此例中挂载了两块磁盘,分别为 `disk1` 和 `disk2`。两个驱动器都有一个包含文件的 `Videos` 目录。

现在,我们将使用 mergerfs 挂载这些驱动器,使它们看起来像一个更大的驱动器。

```
$ sudo mergerfs -o defaults,allow_other,use_ino,category.create=mfs,moveonenospc=true,minfreespace=1M /disk1:/disk2 /media
```
mergerfs 的手册页内容非常庞杂,因此我们在这里说明一下上面用到的这些选项。

  * `defaults`:除非另行指定,否则使用默认设置。
  * `allow_other`:允许 `sudo` 或 `root` 以外的用户查看文件系统。
  * `use_ino`:让 mergerfs 而非 libfuse 提供文件/目录的 inode。虽然这不是默认值,但建议启用它,以便硬链接的文件共享相同的 inode 值。
  * `category.create=mfs`:根据可用空间把新文件分散存放到各个驱动器上。
  * `moveonenospc=true`:启用后,如果写入失败,将扫描并查找可用空间最大的驱动器。
  * `minfreespace=1M`:最小可用空间值。
  * `disk1`:第一块硬盘。
  * `disk2`:第二块硬盘。
  * `/media`:挂载驱动器的目录。
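上面这条 `mergerfs` 命令只对当前会话生效,重启后不会保留。如果想让挂载开机自动生效,可以在 `/etc/fstab` 中加入类似下面的一行(这只是一个示意,挂载点和选项沿用上文的例子,请按自己的实际磁盘路径调整):

```
/disk1:/disk2  /media  fuse.mergerfs  defaults,allow_other,use_ino,category.create=mfs,moveonenospc=true,minfreespace=1M  0  0
```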
看起来是这样的:
```
$ df -hT | grep media
```

继续示例:

有一个叫 `Baby's second Xmas.mkv` 的 30M 视频。让我们将其复制到用 mergerfs 挂载的 `/media` 文件夹中。

```
$ ls -lh "Baby's second Xmas.mkv"
$ df -hT | grep media
1:2 fuse.mergerfs 66M 31M 30M 51% /media
```

从磁盘空间利用率中可以看到,因为 `disk1` 没有足够的可用空间,所以 mergerfs 自动将文件复制到了 `disk2`。

这是所有文件详情:
--------------------------------------------------------------------------------

via: https://fedoramagazine.org/using-mergerfs-to-increase-your-virtual-storage/

作者:[Curt Warfield][a]
选题:[lujun9972][b]
译者:[geekpi](https://github.com/geekpi)
校对:[wxy](https://github.com/wxy)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[#]: collector: (lujun9972)
[#]: translator: (lxbwolf)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: collector: (lujun9972)
[#]: translator: (LazyWolfLin)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (How I containerize a build system)
[#]: via: (https://opensource.com/article/20/4/how-containerize-build-system)
[#]: author: (Ravi Chandran https://opensource.com/users/ravichandran)

How I containerize a build system
======

Building a repeatable structure to deliver applications as containers can be complicated. Here is one way to do it effectively.

![Containers on a ship on the ocean][1]
A build system comprises the tools and processes used to transition from source code to a running application. This transition also involves changing the code's audience from the software developer to the end user, whether the end user is a colleague in operations or a deployment system.

After creating a few build systems using containers, I think I have a decent, repeatable approach that's worth sharing. These build systems were used for generating loadable software images for embedded hardware and compiling machine learning algorithms, but the approach is abstract enough to be used in any container-based build system.

This approach is about creating or organizing the build system in a way that makes it easy to use and maintain. It's not about the tricks needed to containerize any particular software compiler or tool. It applies to the common use case of software developers building software to hand off a maintainable image to other technical users (whether they are sysadmins, DevOps engineers, or some other title). The build system is abstracted away from the end users so that they can focus on the software.

### Why containerize a build system?

Creating a repeatable, container-based build system can provide a number of benefits to a software team:

  * **Focus:** I want to focus on writing my application. When I call a tool to "build," I want the toolset to deliver a ready-to-use binary. I don't want to spend time troubleshooting the build system. In fact, I'd rather not know or care about the build system.
  * **Identical build behavior:** Whatever the use case, I want to ensure that the entire team uses the same versions of the toolset and gets the same results when building. Otherwise, I am constantly dealing with the case of "it works on my PC but not yours." Using the same toolset version and getting identical output for a given input source file set is critical in a team project.
  * **Easy setup and future migration:** Even if a detailed set of instructions is given to everyone to install a toolset for a project, chances are someone will get it wrong. Or there could be issues due to how each person has customized their Linux environment. This can be further compounded by the use of different Linux distributions across the team (or other operating systems). The issues can get uglier quickly when it comes time to move to the next version of the toolset. Using containers and the guidelines in this article will make migration to newer versions much easier.

Containerizing the build systems that I use on my projects has certainly been valuable in my experience, as it has alleviated the problems above. I tend to use Docker for my container tooling, but there can still be issues because installation and network configuration vary from environment to environment, especially if you work in a corporate environment with complex proxy settings. But at least now I have fewer build system problems to deal with.

### Walking through a containerized build system

I created a [tutorial repository][2] you can clone and examine at a later time or follow along through this article. I'll be walking through all the files in the repository. The build system is deliberately trivial (it runs **gcc**) to keep the focus on the build system architecture.

### Build system requirements

Two key aspects that I think are desirable in a build system are:

  * **Standard build invocation:** I want to be able to build code by pointing to some work directory whose path is **/path/to/workdir**. I want to invoke the build as `./build.sh /path/to/workdir`. To keep the example architecture simple (for the sake of explanation), I'll assume that the output is also generated somewhere within **/path/to/workdir**. (Otherwise, it would increase the number of volumes exposed to the container, which is not difficult, but more cumbersome to explain.)
  * **Custom build invocation via shell:** Sometimes, the toolset needs to be used in unforeseen ways. In addition to the standard **build.sh** to invoke the toolset, some of these could be added as options to **build.sh**, if needed. But I always want to be able to get to a shell where I can invoke toolset commands directly. In this trivial example, say I sometimes want to try out different **gcc** optimization options to see the effects. To achieve this, I want to invoke `./shell.sh /path/to/workdir`. This should get me to a Bash shell inside the container with access to the toolset and to my **workdir**, so I can experiment as I please with the toolset.

### Build system architecture

To comply with the basic requirements above, here is how I architect the build system:

![Container build system architecture][3]

At the bottom, the **workdir** represents any software source code that needs to be built by the software developer end users. Typically, this **workdir** will be a source-code repository. The end users can manipulate this source code repository in any way they want before invoking a build. For example, if they're using **git** for version control, they could **git checkout** the feature branch they are working on and add or modify files. This keeps the build system independent of the **workdir**.

The three blocks at the top collectively represent the containerized build system. The left-most (yellow) block at the top represents the scripts (**build.sh** and **shell.sh**) that the end user will use to interact with the build system.

In the middle (the red block) is the Dockerfile and the associated script **build_docker_image.sh**. The development operations people (me, in this case) will typically execute this script and generate the container image. (In fact, I'll execute this many, many times until I get everything working right, but that's another story.) And then I would distribute the image to the end users, such as through a container trusted registry. The end users will need this image. In addition, they will clone the build system repository (i.e., one that is equivalent to the [tutorial repository][2]).
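The Dockerfile itself is not listed in this article. A minimal sketch consistent with the description above might look like the following; the base image and the package installation step are assumptions for illustration, since only the **ENTRYPOINT** line is quoted in the article:

```
# install_swbuilder.dockerfile (sketch; base image and packages are assumptions)
FROM ubuntu:20.04

# Bake the toolset into the image (here, just gcc for the trivial build)
RUN apt-get update && \
    apt-get install -y --no-install-recommends gcc libc6-dev && \
    rm -rf /var/lib/apt/lists/*

# run bash script and process the input command
ENTRYPOINT [ "/bin/bash", "/scripts/run_build.sh" ]
```

The key design point is that **run_build.sh** is not copied into the image; it arrives via the **/scripts** volume mapping, so the invocation logic can change without rebuilding the image.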
The **run_build.sh** script on the right is executed inside the container when the end user invokes either **build.sh** or **shell.sh**. I'll explain these scripts in detail next. The key here is that the end user does not need to know anything about the red or blue blocks, or how a container works, in order to use any of this.

### Build system details

The tutorial repository's file structure maps to this architecture. I've used this prototype structure for relatively complex build systems, so its simplicity is not a limitation in any way. Below, I've listed the tree structure of the relevant files from the repository. The **dockerize-tutorial** folder could be replaced with any other name corresponding to a build system. From within this folder, I invoke either **build.sh** or **shell.sh** with the one argument that is the path to the **workdir**.

```
dockerize-tutorial/
├── build.sh
├── shell.sh
└── swbuilder
    ├── build_docker_image.sh
    ├── install_swbuilder.dockerfile
    └── scripts
        └── run_build.sh
```

Note that I've deliberately excluded the **example_workdir** above, which you'll find in the tutorial repository. Actual source code would typically reside in a separate repository and not be part of the build tool repository; I included it in this repository so I didn't have to deal with two repositories in the tutorial.

Doing the tutorial is not necessary if you're only interested in the concepts, as I'll explain all the files. But if you want to follow along (and have Docker installed), first build the container image **swbuilder:v1** with:

```
cd dockerize-tutorial/swbuilder/
./build_docker_image.sh
docker image ls  # resulting image will be swbuilder:v1
```

Then invoke **build.sh** as:

```
cd dockerize-tutorial
./build.sh ~/repos/dockerize-tutorial/example_workdir
```

The code for [build.sh][4] is below. This script instantiates a container from the container image **swbuilder:v1**. It performs two volume mappings: one from the **example_workdir** folder to a volume inside the container at path **/workdir**, and the second from **dockerize-tutorial/swbuilder/scripts** outside the container to **/scripts** inside the container.

```
docker container run \
    --volume $(pwd)/swbuilder/scripts:/scripts \
    --volume $1:/workdir \
    --user $(id -u ${USER}):$(id -g ${USER}) \
    --rm -it --name build_swbuilder swbuilder:v1 \
    build
```

In addition, **build.sh** invokes the container to run with your username (and group, which the tutorial assumes to be the same) so that you will not have issues with file permissions when accessing the generated build output.

Note that [**shell.sh**][5] is nearly identical. One difference is naming: **build.sh** creates a container named **build_swbuilder** while **shell.sh** creates one named **shell_swbuilder**. This way, there are no conflicts if either script is invoked while the other one is running.

The other key difference between the two scripts is the last argument: **build.sh** passes in the argument **build** while **shell.sh** passes in the argument **shell**. If you look at the [Dockerfile][6] that is used to create the container image, the last line contains the following **ENTRYPOINT**. This means that the **docker container run** invocation above will result in executing the **run_build.sh** script with either **build** or **shell** as the sole input argument.

```
# run bash script and process the input command
ENTRYPOINT [ "/bin/bash", "/scripts/run_build.sh"]
```

[**run_build.sh**][7] uses this input argument to either start the Bash shell or invoke **gcc** to perform the build of the trivial **helloworld.c** project. A real build system would typically invoke a Makefile and not run **gcc** directly.

```
cd /workdir

if [ $1 = "shell" ]; then
    echo "Starting Bash Shell"
    /bin/bash
elif [ $1 = "build" ]; then
    echo "Performing SW Build"
    gcc helloworld.c -o helloworld -Wall
fi
```

You could certainly pass more than one argument if your use case demands it. For the build systems I've dealt with, the build is usually for a given project with a specific **make** invocation. In the case of a build system where the build invocation is complex, you can have **run_build.sh** call a specific script inside **workdir** that the end user has to write.
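If the dispatch grows beyond two modes, a `case` statement keeps **run_build.sh** readable. A sketch of such an extension, wrapped in a function for clarity; the **custom** mode and the **build_project.sh** name are hypothetical additions, not part of the tutorial repository:

```shell
# Inside the container, run_build.sh would first `cd /workdir` and then
# dispatch on its single argument. "custom" and build_project.sh are
# hypothetical illustrations of delegating to a user-maintained script.
run_build() {
    case "$1" in
        shell)
            echo "Starting Bash Shell"
            /bin/bash
            ;;
        build)
            echo "Performing SW Build"
            gcc helloworld.c -o helloworld -Wall
            ;;
        custom)
            # delegate complex builds to a script kept inside workdir
            ./build_project.sh
            ;;
        *)
            echo "Usage: run_build.sh {shell|build|custom}" >&2
            return 1
            ;;
    esac
}
```

Keeping the per-project logic inside **workdir** means the container image does not need rebuilding when the build procedure changes.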
### A note about the scripts folder

You may be wondering why the **scripts** folder is located deep in the tree structure rather than at the top level of the repository. Either approach would work, but I didn't want to encourage the end user to poke around and change things there. Placing it deeper makes it more difficult to poke around. Also, I could have added a **.dockerignore** file to ignore the **scripts** folder, as it doesn't need to be part of the container context. But since it's tiny, I didn't bother.

### Simple yet flexible

While the approach is simple, I've used it for a few rather different build systems and found it to be quite flexible. The aspects that are going to be relatively stable (e.g., a given toolset that changes only a few times a year) are fixed inside the container image. The aspects that are more fluid are kept outside the container image as scripts. This allows me to easily modify how the toolset is invoked by updating the script and pushing the changes to the build system repository. All the user needs to do is to pull the changes to their local build system repository, which is typically quite fast (unlike updating a Docker image). The structure lends itself to having as many volumes and scripts as are needed while abstracting the complexity away from the end user.

How will you need to modify your application to optimize it for a containerized environment?

--------------------------------------------------------------------------------

via: https://opensource.com/article/20/4/how-containerize-build-system

作者:[Ravi Chandran][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://opensource.com/users/ravichandran
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/containers_2015-3-osdc-lead.png?itok=O6aivM_W (Containers on a ship on the ocean)
[2]: https://github.com/ravi-chandran/dockerize-tutorial
[3]: https://opensource.com/sites/default/files/uploads/build_sys_arch.jpg (Container build system architecture)
[4]: https://github.com/ravi-chandran/dockerize-tutorial/blob/master/build.sh
[5]: https://github.com/ravi-chandran/dockerize-tutorial/blob/master/shell.sh
[6]: https://github.com/ravi-chandran/dockerize-tutorial/blob/master/swbuilder/install_swbuilder.dockerfile
[7]: https://github.com/ravi-chandran/dockerize-tutorial/blob/master/swbuilder/scripts/run_build.sh
[#]: collector: (lujun9972)
[#]: translator: (lxbwolf)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Managing Git projects with submodules and subtrees)
[#]: via: (https://opensource.com/article/20/5/git-submodules-subtrees)
[#]: author: (Manaswini Das https://opensource.com/users/manaswinidas)

Managing Git projects with submodules and subtrees
======

Submodules and subtrees help you manage child projects across multiple repositories.

![Digital creative of a browser on the internet][1]
If you are into open source development, you have probably worked with Git to manage source code. You might have come across projects with numerous dependencies and/or sub-projects. How do you manage them?

For an open source organization, it can be tricky to achieve single-source documentation and dependency management for the community _and_ the product. The documentation and project often end up fragmented and redundant, which makes them difficult to maintain.

### The need

Suppose you want to use a single project as a child project inside a repository. The traditional method is just to copy the project into the parent repository. But what if you want to use the same child project in many parent repositories? It wouldn't be feasible to copy the child project into every parent and have to make changes in all of them whenever you update it. This would create redundancy and inconsistency in the parent repositories and make it difficult to update and maintain the child project.

### Git submodules and subtrees

What if you could put one project within another using a single command? What if you could just add the project as a child to any number of projects and push changes on the go, whenever you want to? Git provides solutions for this: Git submodules and Git subtrees. These tools were created to support code-sharing development workflows on a more modular level, aspiring to bridge the gap between the Git repository's source-code management (SCM) and the sub-repos within it.

![Cherry tree growing on a mulberry tree][2]

Cherry tree growing on a mulberry tree

This is a real-life scenario of the concepts this article will cover in detail. If you're already familiar with trees, here is what this model will look like:

![Tree with subtrees][3]

CC BY-SA opensource.com

### What are Git submodules?

Git provides submodules in its default package that enable Git repositories to be nested within other repositories. To be precise, a Git submodule points to a specific commit in the child repository. Here is what Git submodules look like in my [Docs-test][4] GitHub repo:

![Git submodules screenshot][5]

The format **[folder@commitId][6]** indicates that the repository is a submodule, and you can directly click on the folder to go to the child repository. The config file called **.gitmodules** contains all the submodule repository details. My repo's **.gitmodules** file looks like this:

![Screenshot of .gitmodules file][7]
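Screenshots aside, a **.gitmodules** entry has a simple INI-style shape. A representative entry, with a made-up name and URL for illustration, looks like:

```
[submodule "childmodule"]
	path = childmodule
	url = https://github.com/user/childmodule.git
```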
You can use the following commands to work with Git submodules in your repositories.

#### Clone a repository and load submodules

To clone a repository containing submodules:

```
$ git clone --recursive <URL to Git repo>
```

If you have already cloned a repository and want to load its submodules:

```
$ git submodule update --init
```

If there are nested submodules:

```
$ git submodule update --init --recursive
```

#### Download submodules

Downloading submodules sequentially can be a tedious task, so **clone** and **submodule update** support the **\--jobs** (or **-j**) parameter.

For example, to download eight submodules at once, use:

```
$ git submodule update --init --recursive -j 8
$ git clone --recursive --jobs 8 <URL to Git repo>
```

#### Pull submodules

Before running or building the parent repository, you have to make sure that the child dependencies are up to date.

To pull all changes in submodules:

```
$ git submodule update --remote
```

#### Create repositories with submodules

To add a child repository to a parent repository:

```
$ git submodule add <URL to Git repo>
```

To initialize an existing Git submodule:

```
$ git submodule init
```

You can also track a branch rather than a fixed commit by adding **\--remote** to your **submodule update** command:

```
$ git submodule update --remote
```

#### Update submodule commits

As explained above, a submodule is a link that points to a specific commit in the child repository. If you want to update the commit of the submodule, don't worry. You don't need to specify the latest commit explicitly. You can just use the general **submodule update** command:

```
$ git submodule update
```

Just add and commit as you normally would to create and push the parent repository to GitHub.

#### Delete a submodule from a parent repository

Merely deleting a child project folder manually won't remove the child project from the parent repository. To delete a submodule named **childmodule**, use:

```
$ git rm -f childmodule
```
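Note that **git rm -f** removes the gitlink and the **.gitmodules** entry, but Git keeps a cached clone under **.git/modules** until it is removed explicitly. A fuller removal sequence, sketched here in a throwaway repository (all names are made up; the `protocol.file.allow=always` setting is only needed because this demo adds the submodule from a local path):

```shell
set -e
tmp=$(mktemp -d)
cd "$tmp"

# Throwaway child and parent repositories to demonstrate on.
git init -q child
git -C child -c user.email=demo@example.com -c user.name=demo \
    commit -q --allow-empty -m "init"
git init -q parent
cd parent
git -c user.email=demo@example.com -c user.name=demo \
    commit -q --allow-empty -m "init"
git -c protocol.file.allow=always submodule --quiet add ../child childmodule

# The removal sequence:
git submodule deinit -f childmodule    # unregister from .git/config
git rm -q -f childmodule               # drop the gitlink and .gitmodules entry
rm -rf .git/modules/childmodule        # drop the cached repository data
git -c user.email=demo@example.com -c user.name=demo \
    commit -q -m "Remove childmodule"
```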
Although Git submodules may appear easy to work with, it can be difficult for beginners to find their way around them.

### What are Git subtrees?

Git subtrees, introduced in Git 1.7.11, allow you to insert a copy of any repository as a subdirectory of another one. It is one of several ways Git projects can inject and manage project dependencies. It stores the external dependencies in regular commits. Git subtrees provide clean integration points, so they're easier to revert.

If you use the [subtrees tutorial provided by GitHub][8], you won't see a **.gittrees** config file locally whenever you add a subtree. This makes it difficult to recognize subtrees, because they look like ordinary folders even though they are copies of the child repository. The version of Git subtree with the **.gittrees** config file is not available in the default Git package, so to get it, you must download git-subtree from the [**/contrib/subtree** folder][9] in the Git source repository.

You can clone any repository containing subtrees, just like any other repository, but it may take longer because entire copies of the child repository reside in the parent repository.

You can use the following commands to work with Git subtrees in your repositories.

#### Add a subtree to a parent repository

To add a new subtree to a parent repository, you first need to **remote add** it and then run the **subtree add** command, like:

```
$ git remote add remote-name <URL to Git repo>
$ git subtree add --prefix=folder/ remote-name <URL to Git repo> subtree-branchname
```

This merges the whole child project's commit history into the parent repository.

#### Push and pull changes to and from the subtree

```
$ git subtree push-all
```

or

```
$ git subtree pull-all
```

(The **push-all** and **pull-all** commands come from the **.gittrees** variant mentioned above; the stock **git subtree** pushes and pulls per remote with **\--prefix**.)
### Which should you use?

Every tool has pros and cons. Here are some features that may help you decide which is best for your use case.

  * Git submodules keep the repository size smaller since they are just links that point to a specific commit in the child project, whereas Git subtrees house the entire child project along with its history.
  * Git submodules need to be accessible on a server, but subtrees are decentralized.
  * Git submodules are mostly used in component-based development, whereas Git subtrees are used in system-based development.

A Git subtree isn't a direct alternative to a Git submodule. There are certain caveats that guide where each can be used. If there is an external repository you own and are likely to push code back to, use Git submodules, since they are easier to push. If you have third-party code that you are unlikely to push to, use Git subtrees, since they are easier to pull.

Give Git subtrees and submodules a try and let me know how it goes in the comments.

--------------------------------------------------------------------------------

via: https://opensource.com/article/20/5/git-submodules-subtrees

作者:[Manaswini Das][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://opensource.com/users/manaswinidas
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/browser_web_internet_website.png?itok=g5B_Bw62 (Digital creative of a browser on the internet)
[2]: https://opensource.com/sites/default/files/uploads/640px-bialbero_di_casorzo.jpg (Cherry tree growing on a mulberry tree)
[3]: https://opensource.com/sites/default/files/subtree_0.png (Tree with subtrees)
[4]: https://github.com/manaswinidas/Docs-test/
[5]: https://opensource.com/sites/default/files/uploads/git-submodules_github.png (Git submodules screenshot)
[6]: mailto:folder@commitId
[7]: https://opensource.com/sites/default/files/uploads/gitmodules.png (Screenshot of .gitmodules file)
[8]: https://help.github.com/en/github/using-git/about-git-subtree-merges
[9]: https://github.com/git/git/tree/master/contrib/subtree
[#]: collector: (lujun9972)
[#]: translator: (geekpi)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Ubuntu Studio To Replace Xfce With KDE Plasma Desktop Environment)
[#]: via: (https://itsfoss.com/ubuntu-studio-opts-for-kde/)
[#]: author: (Ankush Das https://itsfoss.com/author/ankush/)

Ubuntu Studio To Replace Xfce With KDE Plasma Desktop Environment
======
[Ubuntu Studio][1] is a popular [official flavour of Ubuntu][2] tailored for creative content creators involved in audio production, video, graphics, photography, and book publishing. It offers a lot of multimedia content creation applications out of the box with the best possible experience.

After the recent 20.04 LTS release, the Ubuntu Studio team highlighted something very important in their [official announcement][3], and probably not everyone noticed the key information, i.e., Ubuntu Studio's future.

Ubuntu Studio 20.04 will be the last version to ship with the [Xfce desktop environment][4]. All future releases will use [KDE Plasma][5] instead.

### Why is Ubuntu Studio ditching Xfce?

![][6]

As per their clarification, Ubuntu Studio isn't focused on any particular look or feel but aims to provide the best user experience possible, and KDE proves to be a better option.

> Plasma has proven to have better tools for graphics artists and photographers, as can be seen in Gwenview, Krita, and even the file manager Dolphin. Additionally, it has Wacom tablet support better than any other desktop environment.

> It has become so good that the majority of the Ubuntu Studio team is now using Kubuntu with Ubuntu Studio added-on via Ubuntu Studio Installer as their daily driver. With so many of us using Plasma, the timing just seems right to focus on a transition to Plasma with our next release.

Of course, every desktop environment is tailored for something different. Here, they think that KDE Plasma is the desktop environment best suited to replace Xfce and provide a better user experience to all users.

I'm not sure how users will react to this, as every user has a different set of preferences. If the existing users don't have a problem with KDE, it isn't going to be a big deal.

It is worth noting that Ubuntu Studio also mentioned why KDE is potentially a superior choice for them:

> The Plasma desktop environment has, without Akonadi, become just as light in resource usage as Xfce, perhaps even lighter. Other audio-focused Linux distributions, such as Fedora Jam and KXStudio, have historically used the KDE Plasma desktop environment and done well with the audio.

Also, they've highlighted [an article by Jason Evangelho at Forbes][7] where some benchmarks reveal that KDE is almost as light as Xfce. Even though that's a good sign, we still have to wait for users to test-drive the KDE-powered Ubuntu Studio. Only then will we be able to observe whether Ubuntu Studio's decision to ditch the Xfce desktop environment was the right one.

### What will change for Ubuntu Studio users after this change?

The overall workflow may be affected (or improved) moving forward with KDE on Ubuntu Studio 20.10 and later.

However, the upgrade process (from 20.04 to 20.10) will result in broken systems. So, a fresh install of Ubuntu Studio 20.10 or later versions will be the only way to go.

They've also mentioned that they will be constantly evaluating for any duplication with the pre-installed apps. So, I believe more details will follow in the coming days.

Ubuntu Studio is the second distribution that has changed its main desktop environment in recent times. Earlier, [Lubuntu][8] switched from LXDE to LXQt.

What do you think about this change? Feel free to share your thoughts in the comments below.

--------------------------------------------------------------------------------

via: https://itsfoss.com/ubuntu-studio-opts-for-kde/

作者:[Ankush Das][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://itsfoss.com/author/ankush/
[b]: https://github.com/lujun9972
[1]: https://ubuntustudio.org/
[2]: https://itsfoss.com/which-ubuntu-install/
[3]: https://ubuntustudio.org/2020/04/ubuntu-studio-20-04-lts-released/
[4]: https://xfce.org
[5]: https://kde.org/plasma-desktop
[6]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2020/05/ubuntu-studio-kde-xfce.jpg?ssl=1
|
||||
[7]: https://www.forbes.com/sites/jasonevangelho/2019/10/23/bold-prediction-kde-will-steal-the-lightweight-linux-desktop-crown-in-2020
|
||||
[8]: https://itsfoss.com/lubuntu-20-04-review/
@ -1,5 +1,5 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: translator: (geekpi)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
@ -0,0 +1,164 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Ensmallening Go binaries by prohibiting comparisons)
[#]: via: (https://dave.cheney.net/2020/05/09/ensmallening-go-binaries-by-prohibiting-comparisons)
[#]: author: (Dave Cheney https://dave.cheney.net/author/davecheney)

Ensmallening Go binaries by prohibiting comparisons
======

Conventional wisdom dictates that the larger the number of types declared in a Go program, the larger the resulting binary. Intuitively this makes sense; after all, what's the point in defining a bunch of types if you're not going to write code that operates on them? However, part of the job of a linker is to detect functions which are not referenced by a program (say, because they are part of a library of which only a subset of the functionality is used) and remove them from the final output. Yet the adage "mo' types, mo' binary" holds true for the majority of Go programs.

In this post I'll dig into what equality means in the context of a Go program and why changes [like this][1] have a measurable impact on the size of a Go program.

### Defining equality between two values

The Go spec defines the concepts of assignability and equality. Assignability is the act of assigning a value to an identifier. Not everything which is declared can be assigned, for example constants and functions. Equality is the act of comparing two identifiers by asking _are their contents the same?_

Being a strongly typed language, the notion of sameness is fundamentally rooted in the identifier's type. Two things can only be the same if they are of the same type. Beyond that, the type of the values defines how they are compared.

For example, integers are compared arithmetically. For pointer types, equality is determining if the addresses they point to are the same. Reference types like maps and channels, like pointers, are considered to be the same if they have the same address.

These are all examples of bitwise equality: if the bit patterns of the memory that two values occupy are the same, those values are equal. This is known as memcmp, short for memory comparison, as equality is defined by comparing the contents of two areas of memory.

Hold on to this idea; I'll come back to it in a second.

### Struct equality

Beyond scalar types like integers, floats, and pointers is the realm of compound types: structs. All structs are laid out in memory in program order, thus this declaration:

```
type S struct {
    a, b, c, d int64
}
```

will consume 32 bytes of memory; 8 bytes for `a`, then 8 bytes for `b`, and so on. The spec says that _struct values are comparable if all their fields are comparable_. Thus two structs are equal iff all of their fields are equal.

```
a := S{1, 2, 3, 4}
b := S{1, 2, 3, 4}
fmt.Println(a == b) // prints true
```

Under the hood the compiler uses memcmp to compare the 32 bytes of `a` and `b`.

### Padding and alignment

However, this simplistic bitwise comparison strategy will fail in situations like this:

```
type S struct {
    a byte
    b uint64
    c int16
    d uint32
}

func main() {
    a := S{1, 2, 3, 4}
    b := S{1, 2, 3, 4}
    fmt.Println(a == b) // prints true
}
```

The code compiles and the comparison is still true, but under the hood the compiler cannot rely on comparing the bit patterns of `a` and `b` because the structure contains _padding_.

Go requires each field in a struct to be naturally aligned: 2-byte values must start on an even address, 4-byte values on an address divisible by 4, and so on[1][2]. The compiler inserts padding to ensure the fields are _aligned_ according to their type and the underlying platform. In effect, after padding, this is what the compiler sees[2][3]:

```
type S struct {
    a byte
    _ [7]byte // padding
    b uint64
    c int16
    _ [2]int16 // padding
    d uint32
}
```
Padding exists to ensure the correct field alignments, and while it does take up space in memory, the contents of those padding bytes are unknown. You might assume that, being Go, the padding bytes are always zero, but it turns out that's not the case; the contents of padding bytes are simply not defined. Because they're not guaranteed to hold any particular value, doing a bitwise comparison may return false, because the nine bytes of padding spread throughout the 24 bytes of `S` may not be the same.

The Go compiler solves this problem by generating what is known as an equality function. In this case `S`'s equality function knows how to compare two values of type `S` by comparing only the fields in the struct while skipping over the padding.

### Type algorithms

Phew, that was a lot of setup to illustrate why, for each type defined in a Go program, the compiler may generate several supporting functions, known inside the compiler as the type's algorithms. In addition to the equality function, the compiler will generate a hash function if the type is used as a map key. Like the equality function, the hash function must consider factors like padding when computing its result to ensure it remains stable.
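Merely using a struct type as a map key is enough to trigger generation of its hash algorithm. A minimal sketch (the type name `key` is my own, not from the post):

```go
package main

import "fmt"

// key contains padding (4 bytes after x on 64-bit platforms), so the
// compiler-generated hash function must skip those bytes to stay stable.
type key struct {
	x int32
	y int64
}

func main() {
	m := make(map[key]string)
	m[key{x: 1, y: 2}] = "hello" // inserting exercises the generated hash function
	fmt.Println(m[key{x: 1, y: 2}]) // prints "hello"
}
```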

It turns out that it can be hard, and sometimes non-obvious, to intuit when the compiler will generate these functions (it happens more often than you'd expect), and it can be hard for the linker to eliminate the ones that are not needed, as reflection often causes the linker to be more conservative when trimming types.

### Reducing binary size by prohibiting comparisons

Now we're at a point where we can explain Brad's change. By adding an incomparable field[3][4] to the type, the resulting struct is by extension incomparable, thus forcing the compiler to elide the generation of the eq and hash algorithms, short-circuiting the linker's elimination of those functions and, in practice, reducing the size of the final binary. As an example of this technique, this program:

```
package main

import "fmt"

func main() {
    type t struct {
        // _ [0][]byte uncomment to prevent comparison
        a byte
        b uint16
        c int32
        d uint64
    }
    var a t
    fmt.Println(a)
}
```

when compiled with Go 1.14.2 (darwin/amd64), decreased from 2174088 to 2174056 bytes, a saving of 32 bytes. In isolation this 32-byte saving may seem like small beer, but consider that equality and hash functions can be generated for every type in the transitive closure of your program and all its dependencies, and that the size of these functions varies depending on the size and complexity of the type; prohibiting them can have a sizeable impact on the final binary, over and above the old saw of `-ldflags="-s -w"`.

The bottom line: if you don't wish to make your types comparable, a hack like this enforces it at the source level while contributing to a small reduction in the size of your binary.

* * *

Addendum: thanks to Brad's prodding, Go 1.15 already has a bunch of improvements by [Cherry Zhang][5] and [Keith Randall][6] that fix the most egregious of the failures to eliminate unnecessary equality and hash functions (although I suspect it was also to avoid the proliferation of this class of CLs).

1. On 32-bit platforms `int64` and `uint64` values may not be 8-byte aligned as the natural alignment of the platform is 4 bytes. See [issue 599][7] for the gory details.[][8]
2. 32-bit platforms would see `_ [3]byte` padding between the declaration of `a` and `b`. See previous.[][9]
3. Brad used `[0]func()`, but any type that the spec limits or prohibits comparisons on will do. By declaring the array with zero elements, the type has no impact on the size or alignment of the struct.[][10]

#### Related posts:

1. [How the Go runtime implements maps efficiently (without generics)][11]
2. [The empty struct][12]
3. [Padding is hard][13]
4. [Typed nils in Go 2][14]

--------------------------------------------------------------------------------

via: https://dave.cheney.net/2020/05/09/ensmallening-go-binaries-by-prohibiting-comparisons

作者:[Dave Cheney][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://dave.cheney.net/author/davecheney
[b]: https://github.com/lujun9972
[1]: https://github.com/golang/net/commit/e0ff5e5a1de5b859e2d48a2830d7933b3ab5b75f
[2]: tmp.uBLyaVR1Hm#easy-footnote-bottom-1-4116 (On 32bit platforms <code>int64</code> and <code>uint64</code> values may not be 8 byte aligned as the natural alignment of the platform is 4 bytes. See <a href="https://github.com/golang/go/issues/599">issue 599</a> for the gory details.)
[3]: tmp.uBLyaVR1Hm#easy-footnote-bottom-2-4116 (32 bit platforms would see <code>_ [3]byte</code> padding between the declaration of <code>a</code> and <code>b</code>. See previous.)
[4]: tmp.uBLyaVR1Hm#easy-footnote-bottom-3-4116 (Brad used <code>[0]func()</code>, but any type that the spec limits or prohibits comparisons on will do. By declaring the array has zero elements the type has no impact on the size or alignment of the struct.)
[5]: https://go-review.googlesource.com/c/go/+/231397
[6]: https://go-review.googlesource.com/c/go/+/191198
[7]: https://github.com/golang/go/issues/599
[8]: tmp.uBLyaVR1Hm#easy-footnote-1-4116
[9]: tmp.uBLyaVR1Hm#easy-footnote-2-4116
[10]: tmp.uBLyaVR1Hm#easy-footnote-3-4116
[11]: https://dave.cheney.net/2018/05/29/how-the-go-runtime-implements-maps-efficiently-without-generics (How the Go runtime implements maps efficiently (without generics))
[12]: https://dave.cheney.net/2014/03/25/the-empty-struct (The empty struct)
[13]: https://dave.cheney.net/2015/10/09/padding-is-hard (Padding is hard)
[14]: https://dave.cheney.net/2017/08/09/typed-nils-in-go-2 (Typed nils in Go 2)
@ -0,0 +1,86 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Open source underpins coronavirus IoT and robotics solutions)
[#]: via: (https://opensource.com/article/20/5/robotics-covid19)
[#]: author: (Sam Bocetta https://opensource.com/users/sambocetta)

Open source underpins coronavirus IoT and robotics solutions
======
From sanitization of equipment and facilities to plotting the spread of the virus, robots are playing an active role in combating COVID-19.
![Three giant robots and a person][1]

The tech sector is quietly having a boom during the COVID-19 pandemic. Open source developers are getting involved with many aspects of the fight against the coronavirus, [using Python to visualize its spread][2] and helping to repurpose data acquisition systems to perform contact tracing.

However, one of the most exciting areas of current research is the use of robotics to contain the spread of the coronavirus. In the last few weeks, robots have been deployed in critical environments, particularly in hospitals and on airplanes, to help staff sterilize surfaces and objects.

Most of these robots are produced by tech startups, which have seen an opportunity to prove the worth of their proprietary systems. Many of them, however, rely on [open source cloud and IoT tools][3] that have been developed by the open source community.

In this article, we'll take a look at how robotics are being used to fight the disease, the IoT infrastructure that underpins these systems, and finally, the security and privacy concerns that their increased use is highlighting.

### Robots and COVID-19

Around the world, robots are being deployed to help the fight against COVID-19. The most direct use of robots has been in healthcare facilities, and China has taken the lead when it comes to deploying robots in hospitals.

For example, a field hospital that recently opened in Wuhan—where the virus originated—is [making extensive use of robots][4] to help healthcare workers care for patients. Some of these robots provide food, drink, and medicine to patients, and others are used to clean parts of the hospital.

Other companies, such as the Texas startup Xenex Disinfection Services, are using robots and UV light to deactivate viruses, bacteria, and spores on surfaces in airports. Still others, like Dimer UVC Innovations, are focusing on making robots that can [improve aircraft hygiene][5].

Not all of the "robots" deployed against the disease are anthropomorphic, though. The same field hospital in Wuhan that is using human-like robots is also making extensive use of less obviously "robotic" IoT devices.

Patients entering the hospital are screened by networked 5G thermometers that alert staff to anyone showing a high fever, and patients wear smart bracelets and rings equipped with sensors. These are synced with CloudMinds' AI platform, and patients' vital signs, including temperature, heart rate, and blood oxygen levels, can be monitored.

### Robots and the IoT

Even when these robots appear to be independent entities, they make [extensive use of the IoT][6]. In other words, although patients may feel that they are being cared for by a robot that can make its own decisions, in reality, these robots are controlled by large, distributed sensing and data processing systems.

Although many of the robots being deployed are the proprietary property of the tech firms that produce their hardware, their functioning is based on an ecosystem of software that is largely open source.

This observation is an important one because it overturns one of the primary misconceptions about the [way that AI is used today][7], whether in a healthcare setting or elsewhere. Most research into robotics today does not seek to embed fully intelligent AI systems into robots themselves but, instead, uses centralized AI systems to control a wide variety of far less "smart" IoT devices.

This observation, in turn, highlights two key points about the robots currently being developed and used to fight COVID-19. One is that they rely on a software ecosystem—much of it open source—that has been developed in a truly collaborative process involving thousands of engineers. The second is that the networked nature of these robots makes them vulnerable to exploitation.

### Security and privacy

This vulnerability to cybersecurity threats has led some analysts to raise questions about the wisdom of widespread deployment of IoT-driven robotics, whether in the healthcare system or anywhere else. Spyware in the IoT [remains a huge problem][8], and some fear that by integrating IoT systems into healthcare, we may be exposing more data—and more sensitive data—to intruders.

Even where developers are careful to build security into these devices, the sheer number of components they rely on makes DevSecOps processes difficult to implement. Especially in this current time of crisis, many software engineers have been forced to accelerate the release of new components, and this could leave them vulnerable. If a company is rushing to bring a healthcare robot onto the market in response to COVID-19, it's unlikely that the open source code that these devices run on will be [properly audited][9].

And even if companies are able to maintain the integrity of their DevSecOps processes while still accelerating development, it's far from certain that patients themselves understand the privacy implications of delegating their care to IoT devices. Many lack the open source privacy tools [necessary to keep their data private][10] when browsing the internet, let alone those that should be deployed to protect sensitive healthcare data.

### The future

In short, the deployment of robots in the fight against COVID-19 is highlighting long-standing concerns about the integrity, security, and privacy of IoT systems more generally. Professionals in this field have long argued that [IoT audits][11] and [embedded Linux systems][12] should be the standard for IoT development, but in the current crisis, their warnings are likely to be ignored.

This is worrying because it's likely that IoT systems will be increasingly used in healthcare in the coming decade. So whilst the COVID-19 pandemic will provide proof of their utility in this sector, it should also not be used as an excuse to roll out poorly secured, poorly audited IoT software in highly sensitive environments.

--------------------------------------------------------------------------------

via: https://opensource.com/article/20/5/robotics-covid19

作者:[Sam Bocetta][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://opensource.com/users/sambocetta
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/BUSINESS_robots.png?itok=TOZgajrd (Three giant robots and a person)
[2]: https://opensource.com/article/20/4/python-data-covid-19
[3]: https://opensource.com/article/18/7/digital-transformation-strategy-think-cloud
[4]: https://www.cnbc.com/2020/03/18/how-china-is-using-robots-and-telemedicine-to-combat-the-coronavirus.html
[5]: https://www.therobotreport.com/company-offers-germ-killing-robot-to-airports-to-address-coronavirus-outbreak/
[6]: https://www.cloudwards.net/what-is-the-internet-of-things/
[7]: https://opensource.com/article/17/3/5-big-ways-ai-rapidly-invading-our-lives
[8]: https://blog.eccouncil.org/spyware-in-the-iot-what-does-it-mean-for-your-online-privacy/
[9]: https://opensource.com/article/17/10/doc-audits
[10]: https://privacyaustralia.net/privacy-tools/
[11]: https://opensource.com/article/19/11/how-many-iot-devices
[12]: https://opensource.com/article/17/3/embedded-linux-iot-ecosystem
@ -0,0 +1,77 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Can’t Install Deb File on Ubuntu 20.04? Here’s What You Need to do!)
[#]: via: (https://itsfoss.com/cant-install-deb-file-ubuntu/)
[#]: author: (Abhishek Prakash https://itsfoss.com/author/abhishek/)

Can’t Install Deb File on Ubuntu 20.04? Here’s What You Need to do!
======

_**Brief: Double clicking on a deb file doesn’t install it via the software center in Ubuntu 20.04? You are not the only one facing this issue. This tutorial shows how to fix it.**_

In the “[things to do after installing Ubuntu 20.04][1]” article, a few readers mentioned that they had trouble [installing software from a deb file][2].

I found that strange, because installing a program using a deb file is one of the simplest methods. All you have to do is double click the downloaded file and it opens (by default) with the Software Center program. You click on install, it asks for your password, and within a few seconds or minutes the software is installed.

I had [upgraded to Ubuntu 20.04 from 19.10][3] and hadn’t faced this issue until today.

I downloaded the .deb file for [Rocket Chat messenger][4], and when I double clicked on it to install the software, the file was opened with the Archive Manager. This is not what I expected.

![DEB files opened with Archive Manager instead of Software Center][5]

The “fix” is simple, and I am going to show it to you in this quick tutorial.

### Installing deb files in Ubuntu 20.04

For some reason, the default application for opening deb files has been set to the Archive Manager tool in Ubuntu 20.04. The Archive Manager tool is used to [extract zip][6] and other compressed files.

The solution to this problem is pretty simple: [change the default application in Ubuntu][7] for opening DEB files from Archive Manager to Software Install. Let me show you the steps.

**Step 1:** Right click on the downloaded DEB file and select **Properties**:

![][8]

**Step 2:** Go to the “**Open With**” tab, select the “**Software Install**” app, and click on “**Set as default**“.

![][9]

This way, all deb files will be opened with Software Install, i.e. the software center application, in the future.

Confirm it by double clicking a DEB file and seeing whether it opens with the software center application or not.
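If you prefer the terminal, the same default can be changed with `xdg-mime`. This is a sketch: the desktop-file name `org.gnome.Software.desktop` is an assumption that should hold on stock Ubuntu with GNOME Software, but it may differ on your system.

```shell
# See which application currently handles deb packages
xdg-mime query default application/vnd.debian.binary-package

# Set GNOME Software ("Software Install") as the default handler.
# The .desktop name below is an assumption; list candidates with:
#   ls /usr/share/applications | grep -i software
xdg-mime default org.gnome.Software.desktop application/vnd.debian.binary-package
```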

#### Ignorant bug or stupid feature?

Why deb files are supposed to be opened with the Archive Manager is beyond comprehension. I do hope that this is a bug, not a weird feature like [not allowing drag and drop of files on the desktop in Ubuntu 20.04][10].

Since we are discussing deb file installation, let me tell you about a nifty tool, [gdebi][11]. It’s a lightweight application with the sole purpose of installing DEB files. Sometimes, though not always, it can also handle dependencies.

You can learn more about [using gdebi and making it the default for installing deb files here][12].

--------------------------------------------------------------------------------

via: https://itsfoss.com/cant-install-deb-file-ubuntu/

作者:[Abhishek Prakash][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://itsfoss.com/author/abhishek/
[b]: https://github.com/lujun9972
[1]: https://itsfoss.com/things-to-do-after-installing-ubuntu-20-04/
[2]: https://itsfoss.com/install-deb-files-ubuntu/
[3]: https://itsfoss.com/upgrade-ubuntu-version/
[4]: https://rocket.chat/
[5]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2020/05/error-opening-deb-file.png?ssl=1
[6]: https://itsfoss.com/unzip-linux/
[7]: https://itsfoss.com/change-default-applications-ubuntu/
[8]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2020/05/open-deb-files.png?ssl=1
[9]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2020/05/deb-file-install-fix-ubuntu.png?fit=800%2C454&ssl=1
[10]: https://itsfoss.com/add-files-on-desktop-ubuntu/
[11]: https://launchpad.net/gdebi
[12]: https://itsfoss.com/gdebi-default-ubuntu-software-center/
163
translated/tech/20200414 How I containerize a build system.md
Normal file
@ -0,0 +1,163 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: (LazyWolfLin)
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: subject: (How I containerize a build system)
|
||||
[#]: via: (https://opensource.com/article/20/4/how-containerize-build-system)
|
||||
[#]: author: (Ravi Chandran https://opensource.com/users/ravichandran)
|
||||
|
||||
构建系统容器化指南
|
||||
======
|
||||
搭建一个通过容器分发应用的可复用系统可能很复杂,但这儿有个好方法。
|
||||
![Containers on a ship on the ocean][1]
|
||||
|
||||
一个用于将源代码编译成可运行的应用的构建系统是由工具和流程共同组成。在编译过程中还涉及到代码从软件开发者流转到最终用户,无论最终用户是运维的同事还是部署的同事。
|
||||
|
||||
在使用容器搭建了一些构建系统后,我觉得有一个不错的可复用的方法值得分享。虽然这些构建系统被用于编译机器学习算法和为嵌入式硬件生成可加载的软件镜像上,但这个方法足够抽象,可用于任何基于容器的构建系统。
|
||||
|
||||
这个方法是关于通过简单和可维护的方式搭建或组织构建系统,但并不涉及处理特定编译器或工具容器化的技巧。它适用于软件开发人员构建软件并将可维护镜像交给其他技术人员(无论是系统管理员,运维工程师或者其他头衔)的常见情况。由于构建系统对于最终用户是透明的,因此他们能够专注于软件本身。
|
||||
|
||||
### 为什么要容器化构建系统?
|
||||
|
||||
搭建基于容器的可复用构建系统可以为软件团队带来诸多好处:
|
||||
|
||||
* **专注**:我希望专注于应用的开发。当我调用一个名为“build”的工具时,我希望这个工具集能生成一个随时可用的二进制文件。我不想浪费时间在构建系统的查错上。实际上,我宁愿不了解也不关心构建系统。
|
||||
* **一致的构建行为**:无论在哪种使用情况下,我都想确保整个团队使用相同版本的工具集并在构建时得到相同的结果。否则,我就得不断地处理“我这咋就是好的”的麻烦。在团队项目中,使用相同版本的工具集并对给定的输入源文件集产生一致的输出是非常重要。
|
||||
* **易于部署和升级**:即使向每个人都提供一套详细说明来为项目安装工具集,也可能会有人翻车。问题可能是由于每个人对自己的 Linux 环境的个性化修改导致的。在团队中使用不同的 Linux 发行版(或者其他操作系统),情况可能还会变得更复杂。当需要将工具集升级到下一版本时,问题很快就会变得更糟糕。使用容器和本指南将使得新版本升级非常简单。
|
||||
|
||||
我在项目中容器化构建系统的经验显然很有价值,因为它可以缓解上述问题。我倾向于使用 Docker 作为容器工具,虽然在相对特殊的环境中安装和网络配置仍可能出现问题,尤其是当你在一个使用复杂代理的企业环境中工作时。但至少现在我需要解决的构建系统问题已经很少了。
|
||||
|
||||
### 漫步容器化的构建系统
|
||||
|
||||
我创建了一个[教程存储库][2],随后你可以 clone 并检查它,或者按照本文内容进行操作。我将逐个介绍存储库中的文件。这个构建系统非常简单(它使用**gcc**)从而可以专注于构建系统结构上。
|
||||
|
||||
### 构建系统需求
|
||||
|
||||
我认为构建系统中有两个关键点:
|
||||
|
||||
* **标准化构建调用**:我希望能够指定一些形如 **/path/to/workdir** 的工作目录来构建代码。我希望以如下形式调用构建:
|
||||
|
||||
./build.sh /path/to/workdir
|
||||
|
||||
为了使得示例的结构足够简单(以便说明),我将假定输出也在 **/path/to/workdir** 路径下的某处生成。(否则,将增加容器中显示的卷的数量,虽然这并不困难,但解释起来比较麻烦。)
|
||||
* **通过 shell 自定义构建调用**:有时,工具集会以出乎意料的方式被调用。除了标准的工具集调用 **build.sh** 之外,如果需要还可以为 **build.sh** 添加一些选项。但我一直希望能够有一个可以直接调用工具集命令的 shell。在这个简单的示例中,有时我想尝试不同的 **gcc** 优化选项并查看效果。为此,我希望调用:
|
||||
|
||||
./shell.sh /path/to/workdir
|
||||
|
||||
这将让我得到一个容器内部的 Bash shell,并且可以调用工具集和访问我的**工作目录 workdir**,从而我可以根据需要尝试使用这个工具集。
|
||||
|
||||
### 构建系统架构
|
||||
|
||||
为了满足上述基本需求,这是我的构架系统架构:
|
||||
|
||||
![Container build system architecture][3]
|
||||
|
||||
在底部的 **workdir** 代表软件开发者用于构建的任意软件源码。通常,这个 **workdir** 是一个源代码的存储库。在构建之前,最终用户可以通过任何方式来操纵这个存储库。例如,如果他们使用 **git** 作为版本控制工具的话,可以使用 **git checkout** 切换到他们正在工作的功能分支上并添加或修改文件。这样可以使得构建系统独立于 **workdir** 之外。
|
||||
|
||||
顶部的三个模块共同代表了容器化的构建系统。最左边的黄色模块代表最终用户与构建系统交互的脚本(**build.sh** 和 **shell.sh**)。
|
||||
|
||||
在中间的红色模块是 Dockerfile 和相关的脚本 **build_docker_image.sh**。开发者(在这个例子中指我)通常将执行这个脚本并生成容器镜像(事实上我多次执行它直到一切正常为止,但这是另一个故事)。然后我将镜像分发给最终用户,例如通过 container trusted registry 进行分发。最终用户将需要这个镜像。另外,他们将 clone 构建系统存储库(即一个与[教程存储库][2]等效的存储库)。
|
||||
|
||||
当最终用户调用 **build.sh** 或者 **shell.sh** 时,容器内将执行右边的 **run_build.sh** 脚本。接下来我将详细解释这些脚本。这里的关键是最终用户不需要为了使用而去了解任何关于红色或者蓝色模块或者容器工作原理的知识。
|
||||
|
||||
### 构建系统细节
|
||||
|
||||
把教程存储库的文件结构映射到这个系统结构上。我曾将这个原型结构用于相对复杂构建系统,因此它的简单并不会造成任何限制。下面我列出存储库中相关文件的树结构。文件夹 **dockerize-tutorial** 能用构建系统的其他任何名称代替。在这个文件夹下,我用 **workdir** 的路径作参数调用 **build.sh** 或 **shell.sh**。
|
||||
|
||||
```
|
||||
dockerize-tutorial/
|
||||
├── build.sh
|
||||
├── shell.sh
|
||||
└── swbuilder
|
||||
├── build_docker_image.sh
|
||||
├── install_swbuilder.dockerfile
|
||||
└── scripts
|
||||
└── run_build.sh
|
||||
```
|
||||
|
||||
请注意,我上面特意没列出 **example_workdir**,你能在教程存储库中找到。实际的源码通常存放在单独的存储库中,而不是构建工具库中的一部分;本教程为了不必处理两个存储库,所以我将它包含在这个存储库中。
|
||||
|
||||
如果你只对概念感兴趣,本教程并非必须的,因为我将解释所有文件。但是如果你继续本教程(并且已经安装 Docker),首先使用以下命令来构建容器镜像 **swbuilder:v1**:
|
||||
|
||||
```
|
||||
cd dockerize-tutorial/swbuilder/
|
||||
./build_docker_image.sh
|
||||
docker image ls # resulting image will be swbuilder:v1
|
||||
```
|
||||
|
||||
然后调用 **build.sh**:
|
||||
|
||||
```
|
||||
cd dockerize-tutorial
|
||||
./build.sh ~/repos/dockerize-tutorial/example_workdir
|
||||
```
|
||||
|
||||
下面是 [build.sh][4] 的代码。这个脚本从容器镜像 **swbuilder:v1** 实例化一个容器。而这个容器实例映射了两个卷:一个将文件夹 **example_workdir** 挂载到容器内部路径 **/workdir** 上,第二个则将容器外的文件夹 **dockerize-tutorial/swbuilder/scripts** 挂载到容器内部路径 **/scripts** 上。
|
||||
|
||||
```
|
||||
docker container run \
|
||||
--volume $(pwd)/swbuilder/scripts:/scripts \
|
||||
--volume $1:/workdir \
|
||||
--user $(id -u ${USER}):$(id -g ${USER}) \
|
||||
--rm -it --name build_swbuilder swbuilder:v1 \
|
||||
build
|
||||
```
|
||||
|
||||
另外,**build.sh** 还会用你的用户名(以及组,本教程假设两者一致)去运行容器,以便在访问构建输出时不出现文件权限问题。
|
||||
|
||||
请注意,[**shell.sh**][5] 和 **build.sh** 大体上是一致的,除了两点不同:**build.sh** 会创建一个名为 **build_swbuilder** 的容器,而 **shell.sh** 则会创建一个名为 **shell_swbuilder** 的容器。这样一来,当其中一个脚本运行时另一个脚本被调用也不会产生冲突。
|
||||
|
||||
两个脚本之间的另一处关键不同则在于最后一个参数:**build.sh** 传入参数 **build** 而 **shell.sh** 则传入 **shell**。如果你看了用于构建容器镜像的 [Dockerfile][6],就会发现最后一行包含了下面的 **ENTRYPOINT** 语句。这意味着上面的 **docker container run** 调用将使用 **build** 或 **shell** 作为唯一的输入参数来执行 **run_build.sh** 脚本。
|
||||
|
||||
```
|
||||
# run bash script and process the input command
|
||||
ENTRYPOINT [ "/bin/bash", "/scripts/run_build.sh"]
|
||||
```
|
||||
|
||||
[**run_build.sh**][7] 使用这个输入参数来选择启动 Bash shell 还是调用 **gcc** 来构建 **helloworld.c** 项目。一个真正的构建系统通常会使用 Makefile 而非直接运行 **gcc**。
|
||||
|
||||
```
cd /workdir

if [ $1 = "shell" ]; then
    echo "Starting Bash Shell"
    /bin/bash
elif [ $1 = "build" ]; then
    echo "Performing SW Build"
    gcc helloworld.c -o helloworld -Wall
fi
```

You can, of course, pass in more than one argument if your use case calls for it. For the build systems I've dealt with, a build is usually an invocation of **make** for a given target. If a build system has very complex build invocations, you can have **run_build.sh** call a user-written script inside **workdir** instead.
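
For instance, a variant of **run_build.sh** that forwards extra arguments to **make** could look like this. This is only a sketch, not part of the tutorial repository: the **run_build** function wrapper and the **WORKDIR** override (defaulting to the tutorial's **/workdir** mount point) are my own additions so the logic can run outside the container.

```shell
#!/bin/bash
# Hypothetical variant of run_build.sh: the first argument selects the mode,
# and any remaining arguments are forwarded to make (e.g. a target name).
# WORKDIR defaults to /workdir, the container-internal mount point.

run_build() {
    mode="$1"
    shift                          # remaining arguments ("$@") go to make

    cd "${WORKDIR:-/workdir}" || return 1

    if [ "$mode" = "shell" ]; then
        echo "Starting Bash Shell"
        /bin/bash
    elif [ "$mode" = "build" ]; then
        echo "Performing SW Build"
        make "$@"                  # e.g. 'run_build build all' runs 'make all'
    fi
}
```

Invoked as `run_build build all`, this would run `make all` inside the work directory, keeping the mode dispatch of the original script intact.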

### A note on the scripts folder

You may wonder why the **scripts** folder is buried deep in the directory tree rather than sitting at the top of the repository. Either approach would work, but I didn't want to encourage end users to poke around and modify the scripts, and putting them deeper in the tree makes that a little harder. I could also have added a **.dockerignore** file to exclude the **scripts** folder, since it isn't a required part of the container, but it's small, so I didn't bother.


### Simple yet flexible

Although this approach is simple, I've used it for several very different build systems and found it remarkably flexible. The parts that are relatively stable (for example, a given toolchain that changes only a few times a year) are fixed inside the container image. The more variable parts live outside the image as scripts. This lets me modify how the toolchain is invoked simply by editing a script and pushing the change to the build-system repository. All users need to do is pull the change into their local clone of the build-system repository, which is typically very fast (unlike updating a Docker image). This structure scales to as many volumes and scripts as needed while shielding end users from the complexity.

How would you modify your application to optimize it for a containerized environment?


--------------------------------------------------------------------------------

via: https://opensource.com/article/20/4/how-containerize-build-system

Author: [Ravi Chandran][a]
Topic selection: [lujun9972][b]
Translator: [LazyWolfLin](https://github.com/LazyWolfLin)
Proofreader: [校对者ID](https://github.com/校对者ID)

This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux China](https://linux.cn/)

[a]: https://opensource.com/users/ravichandran
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/containers_2015-3-osdc-lead.png?itok=O6aivM_W (Containers on a ship on the ocean)
[2]: https://github.com/ravi-chandran/dockerize-tutorial
[3]: https://opensource.com/sites/default/files/uploads/build_sys_arch.jpg (Container build system architecture)
[4]: https://github.com/ravi-chandran/dockerize-tutorial/blob/master/build.sh
[5]: https://github.com/ravi-chandran/dockerize-tutorial/blob/master/shell.sh
[6]: https://github.com/ravi-chandran/dockerize-tutorial/blob/master/swbuilder/install_swbuilder.dockerfile
[7]: https://github.com/ravi-chandran/dockerize-tutorial/blob/master/swbuilder/scripts/run_build.sh
@ -0,0 +1,209 @@
[#]: collector: (lujun9972)
[#]: translator: (lxbwolf)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Managing Git projects with submodules and subtrees)
[#]: via: (https://opensource.com/article/20/5/git-submodules-subtrees)
[#]: author: (Manaswini Das https://opensource.com/users/manaswinidas)

Managing Git projects with submodules and subtrees
======

Use submodules and subtrees to help you manage subprojects shared across multiple repositories.

![Digital creative of a browser on the internet][1]

If you work on open source projects, you probably use Git to manage your source code. You may have come across projects with numerous dependencies and/or subprojects. How do you manage them?

For an open source organization, achieving a single source of documentation and dependency management for both the community *and* the product is tricky. Documentation and projects tend to become fragmented and redundant, which makes them hard to maintain.

### The need

Suppose you want to use a project as a subproject within a repository. The traditional approach is to copy the project into the parent repository. But what if you want to use the same subproject in several parent projects? Copying it into every parent isn't feasible, because whenever the subproject is updated, you would have to make the change in every parent. This leads to redundancy and inconsistency in the parent projects and makes the subproject hard to update and maintain.

### Git submodules and subtrees

What if you could put one project inside another with a single command? What if you could add a project as a subproject to as many projects as you want, at any time, and keep its changes in sync? Git offers solutions to these problems: Git submodules and Git subtrees. These tools were created to support code-sharing development workflows at a more modular level, aiming to break down the barrier between a Git repository's <ruby>source-code management<rt>source-code management</rt></ruby> and the child repositories beneath it.

![Cherry tree growing on a mulberry tree][2]

A cherry tree growing on a mulberry tree

Here is a real-world scenario of the concepts this article covers in detail. If you're already familiar with trees, this is what the model looks like:

![Tree with subtrees][3]

CC BY-SA opensource.com

### What are Git submodules?

Git provides submodules in its default package; they let you embed one Git repository inside another. To be precise, a Git submodule points to a specific commit in the child repository. Here are the Git submodules in my [Docs-test][4] GitHub repository:

![Git submodules screenshot][5]

The **[folder@commitId][6]** notation indicates that the repository is a submodule, and you can click the folder to go directly to the child repository. The configuration file named **.gitmodules** contains the details of all the submodule repositories. Here is my repository's **.gitmodules** file:

![Screenshot of .gitmodules file][7]
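
For reference, a **.gitmodules** file is plain text in Git's config format. A hypothetical one with a single submodule (the name, path, and URL below are made up) looks like this:

```
[submodule "childmodule"]
	path = childmodule
	url = https://github.com/example/childmodule.git
```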

You can use the following commands to work with Git submodules in your repository.

#### Clone a repository and load its submodules

To clone a repository containing submodules:

```
$ git clone --recursive <URL to Git repo>
```

If you have already cloned the repository and now want to load its submodules:

```
$ git submodule update --init
```

If there are nested submodules:

```
$ git submodule update --init --recursive
```

#### Download submodules

Downloading several submodules one after another is tedious work, so **clone** and **submodule update** support the **\--jobs** (or **-j**) parameter.

For example, to download eight submodules at a time, use:

```
$ git submodule update --init --recursive -j 8
$ git clone --recursive --jobs 8 <URL to Git repo>
```

#### Pull submodules

Before running or building the parent project, you need to make sure its subproject dependencies are up to date.

To pull all changes in the submodules:

```
$ git submodule update --remote
```

#### Create repositories with submodules

To add a child repository to a parent repository:

```
$ git submodule add <URL to Git repo>
```

To initialize an existing Git submodule:

```
$ git submodule init
```

You can also create branches and track commits in your submodules by adding the **\--remote** parameter to the **submodule update** command:

```
$ git submodule update --remote
```

#### Update a submodule's commit

As mentioned above, a submodule is a link that points to a specific commit in the child repository. If you want to update the commit the submodule points to, don't worry: you don't need to specify the latest commit explicitly. The generic **submodule update** command is enough:

```
$ git submodule update
```

Then just add and commit as you normally would when creating the parent repository and pushing it to GitHub.

#### Remove a submodule from a parent repository

Merely deleting a subproject folder by hand won't remove the subproject from the parent repository. To delete a submodule named **childmodule**, use:

```
$ git rm -f childmodule
```

Although Git submodules may look easy to adopt, there is a bit of a learning curve for beginners.

### What are Git subtrees?

Git subtrees, introduced in Git 1.7.11, let you insert a copy of any repository as a subdirectory of another repository. They are one of several ways a Git project can inject and manage project dependencies. They store the external dependency information in regular commits. Git subtrees provide clean integration points, so they are easy to revert.

If you follow [GitHub's subtree tutorial][8] to make use of subtrees, you won't see a **.gittrees** configuration file locally whenever you add a subtree. This makes it hard to recognize subtrees, because they look just like ordinary folders even though they are copies of child repositories. The version of Git subtree with the **.gittrees** configuration file is not included in the default Git package, so to get git-subtree with the **.gittrees** file, you must download git-subtree from the [**/contrib/subtree** folder][9] of the Git source repository.

You can clone any repository containing subtrees just like any other ordinary repository, but it may take longer because the parent repository contains an entire copy of each child repository.

You can use the following commands to work with Git subtrees in your repository.

#### Add a subtree to a parent repository

To add a subtree to a parent repository, first run **remote add**, followed by the **subtree add** command:

```
$ git remote add remote-name <URL to Git repo>
$ git subtree add --prefix=folder/ remote-name <URL to Git repo> subtree-branchname
```

These commands merge the entire commit history of the subproject into the parent repository.
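
To see this flow end to end without a remote server, here is a minimal, self-contained sketch using two temporary local repositories. All repository names and paths below are made up, and it assumes a Git installation that ships the `subtree` command (most distribution packages do):

```shell
#!/bin/bash
# Minimal local demo of git subtree: create a child repo, then graft it
# into a parent repo under folder/. All names here are hypothetical.
set -e
tmp=$(mktemp -d)
export GIT_AUTHOR_NAME=demo GIT_AUTHOR_EMAIL=demo@example.com
export GIT_COMMITTER_NAME=demo GIT_COMMITTER_EMAIL=demo@example.com

# A child repository with a single commit.
git init -q "$tmp/child"
echo "hello" > "$tmp/child/lib.txt"
git -C "$tmp/child" add lib.txt
git -C "$tmp/child" commit -qm "child: initial commit"

# A parent repository with a single commit (subtree add needs a HEAD).
git init -q "$tmp/parent"
echo "parent" > "$tmp/parent/README"
git -C "$tmp/parent" add README
git -C "$tmp/parent" commit -qm "parent: initial commit"

# Look up the child's branch name (master or main, depending on git version).
branch=$(git -C "$tmp/child" symbolic-ref --short HEAD)

# Graft the child into the parent as a subtree under folder/.
cd "$tmp/parent"
git remote add remote-name "$tmp/child"
git subtree add --prefix=folder remote-name "$branch"
```

After this, `folder/lib.txt` exists as a regular file in the parent, and `git log` in the parent shows the child's commits merged into its history.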

#### Push and pull changes for subtrees

```
$ git subtree push-all
```

or

```
$ git subtree pull-all
```

### Which should you use?

Every tool has pros and cons. Here are some features that may help you decide which is best for you:

  * Git submodules have a smaller repository footprint, because they are only links pointing to a specific commit of the subproject, while Git subtrees store the whole subproject along with its history.
  * Git submodules need to be accessible on a server, while subtrees are decentralized.
  * Git submodules are mostly used in component-based development, while Git subtrees are used in system-based development.

A Git subtree isn't a direct replacement for a Git submodule. There is clear guidance on which to use: if you have an external repository that you own, and whose use case is pushing code back to it, use a Git submodule, since pushing is easier. If you have third-party code that you will never push to, use a Git subtree, since pulling is easier.

Try Git subtrees and submodules for yourself and leave your thoughts in the comments.


--------------------------------------------------------------------------------

via: https://opensource.com/article/20/5/git-submodules-subtrees

Author: [Manaswini Das][a]
Topic selection: [lujun9972][b]
Translator: [lxbwolf](https://github.com/lxbwolf)
Proofreader: [校对者ID](https://github.com/校对者ID)

This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux China](https://linux.cn/)


[a]: https://opensource.com/users/manaswinidas
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/browser_web_internet_website.png?itok=g5B_Bw62 (Digital creative of a browser on the internet)
[2]: https://opensource.com/sites/default/files/uploads/640px-bialbero_di_casorzo.jpg (Cherry tree growing on a mulberry tree)
[3]: https://opensource.com/sites/default/files/subtree_0.png (Tree with subtrees)
[4]: https://github.com/manaswinidas/Docs-test/
[5]: https://opensource.com/sites/default/files/uploads/git-submodules_github.png (Git submodules screenshot)
[6]: mailto:folder@commitId
[7]: https://opensource.com/sites/default/files/uploads/gitmodules.png (Screenshot of .gitmodules file)
[8]: https://help.github.com/en/github/using-git/about-git-subtree-merges
[9]: https://github.com/git/git/tree/master/contrib/subtree

@ -0,0 +1,71 @@
[#]: collector: (lujun9972)
[#]: translator: (geekpi)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Ubuntu Studio To Replace Xfce With KDE Plasma Desktop Environment)
[#]: via: (https://itsfoss.com/ubuntu-studio-opts-for-kde/)
[#]: author: (Ankush Das https://itsfoss.com/author/ankush/)

Ubuntu Studio To Replace Xfce With KDE Plasma Desktop Environment
======

[Ubuntu Studio][1] is a popular [official Ubuntu flavor][2] tailored for creative content creators working in audio production, video, graphics, photography, and book publishing. It offers a lot of multimedia content creation applications out of the box, with a good user experience.

After the recent 20.04 LTS release, the Ubuntu Studio team highlighted something very important in its [official announcement][3], and perhaps not everyone noticed the key detail: the future of Ubuntu Studio.

Ubuntu Studio 20.04 will be the last release to ship the [Xfce desktop environment][4]. All future releases will use [KDE Plasma][5] instead.

### Why is Ubuntu Studio ditching Xfce?

![][6]

According to their clarification, Ubuntu Studio isn't committed to any particular look or feel but aims to provide the best user experience, and KDE has proven to be the better option:

> Plasma has proven to have better tools for graphics artists and photographers, as can be seen in Gwenview, Krita, and even the file manager Dolphin. Additionally, its support for Wacom tablets is better than in any other desktop environment.

> It has become so good that the majority of the Ubuntu Studio team now uses Kubuntu with Ubuntu Studio added on via Ubuntu Studio Installer as their daily driver. With so many of us using Plasma, the timing seems right to transition to Plasma in our next release.

Of course, every desktop environment is tailored to something different. They believe KDE Plasma is the most suitable desktop environment to replace Xfce and provide a better user experience to all users.

I'm not sure how users will react to this, though, since every user has different preferences. If existing users don't have a problem with KDE, it's not a big deal.

It's worth noting that Ubuntu Studio also mentioned why KDE is potentially the better choice for them:

> The Plasma desktop environment has, without Akonadi, become just as light in resource usage as Xfce, if not lighter. Other audio-focused Linux distributions, such as Fedora Jam and KXStudio, have historically used the KDE Plasma desktop environment and done well with audio.

Furthermore, they highlighted [an article by Jason Evangelho in Forbes][7] in which some benchmarks show that KDE is almost as light as Xfce. Even though that's a good sign, we still have to wait for users to test-drive the KDE-powered Ubuntu Studio. Only then can we see whether Ubuntu Studio's decision to ditch Xfce was the right one.

### What will change for Ubuntu Studio users?

The overall workflow may be affected (or improved) on Ubuntu Studio 20.10 and later with KDE.

However, the upgrade process (from 20.04 to 20.10) would result in broken systems, so a fresh install of Ubuntu Studio 20.10 or later will be the only way to go.

They also mentioned that they will keep evaluating for duplication among the pre-installed apps, so I believe more details will follow in the coming days.

Ubuntu Studio is the second distribution to switch its main desktop environment recently. Previously, [Lubuntu][8] switched from LXDE to LXQt.

What do you think about this change? Feel free to share your thoughts in the comments below.

--------------------------------------------------------------------------------

via: https://itsfoss.com/ubuntu-studio-opts-for-kde/

Author: [Ankush Das][a]
Topic selection: [lujun9972][b]
Translator: [geekpi](https://github.com/geekpi)
Proofreader: [校对者ID](https://github.com/校对者ID)

This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux China](https://linux.cn/)

[a]: https://itsfoss.com/author/ankush/
[b]: https://github.com/lujun9972
[1]: https://ubuntustudio.org/
[2]: https://itsfoss.com/which-ubuntu-install/
[3]: https://ubuntustudio.org/2020/04/ubuntu-studio-20-04-lts-released/
[4]: https://xfce.org
[5]: https://kde.org/plasma-desktop
[6]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2020/05/ubuntu-studio-kde-xfce.jpg?ssl=1
[7]: https://www.forbes.com/sites/jasonevangelho/2019/10/23/bold-prediction-kde-will-steal-the-lightweight-linux-desktop-crown-in-2020
[8]: https://itsfoss.com/lubuntu-20-04-review/