Merge branch 'LCTT:master' into master

Xiaoyan Zhang 2023-11-09 23:47:45 +08:00 committed by GitHub
commit 919b703223
96 changed files with 6881 additions and 2133 deletions


@ -2,26 +2,29 @@
[#]: via: "https://opensource.com/article/22/8/interpret-compile-java"
[#]: author: "Jayashree Huttanagoudar https://opensource.com/users/jayashree-huttanagoudar"
[#]: collector: "lkxed"
[#]: translator: " "
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "
[#]: translator: "toknow-gh"
[#]: reviewer: "wxy"
[#]: publisher: "wxy"
[#]: url: "https://linux.cn/article-16353-1.html"
A guide to JVM interpretation and compilation
JVM 解释和编译指南
======
Use interpretation, just-in-time compilation, and ahead-of-time compilation efficiently by understanding the differences among them.
Java is a platform-independent language. Programs are converted to *bytecode* after compilation. This bytecode gets converted to *machine code* at runtime. An interpreter emulates the execution of bytecode instructions for the abstract machine on a specific physical machine. Just-in-time (JIT) compilation happens at some point during execution, and ahead-of-time (AOT) compilation happens during build time.
![][0]
This article explains when an interpreter comes into play and when JIT and AOT will occur. I also discuss the trade-offs between JIT and AOT.
> 通过理解解释、即时编译和预先编译之间的区别,有效地使用它们。
### Source code, bytecode, machine code
Java 是一种跨平台的编程语言。程序源代码会被编译为 <ruby>字节码<rt>bytecode</rt></ruby>,然后字节码在运行时被转换为 <ruby>机器码<rt>machine code</rt></ruby>。<ruby>解释器<rt>interpreter</rt></ruby> 在物理机器上模拟出的抽象计算机上执行字节码指令。<ruby>即时<rt>just-in-time</rt></ruby>JIT编译发生在运行期<ruby>预先<rt>ahead-of-time</rt></ruby>AOT编译发生在构建期。
Applications are generally written using a programming language like C, C++, or Java. The set of instructions written using high-level programming languages is called source code. Source code is human readable. To execute it on the target machine, source code needs to be converted to machine code, which is machine readable. Source code is typically converted into machine code by a compiler.
本文将说明解释器、JIT 和 AOT 分别何时起作用,以及如何在 JIT 和 AOT 之间权衡。
In Java, however, the source code is first converted into an intermediate form called *bytecode*. This bytecode is platform independent, which is why Java is well known as a platform-independent programming language. The primary Java compiler `javac` converts the Java source code into bytecode. Then, the bytecode is interpreted by the interpreter.
### 源代码、字节码、机器码
Here is a small `Hello.java` program:
应用程序通常是由 C、C++ 或 Java 等编程语言编写。用这些高级编程语言编写的指令集合称为源代码。源代码是人类可读的。要在目标机器上执行它,需要将源代码转换为机器可读的机器码。这个转换工作通常是由 <ruby>编译器<rt>compiler</rt></ruby> 来完成的。
然而,在 Java 中,源代码首先被转换为一种中间形式,称为字节码。字节码是平台无关的,所以 Java 被称为平台无关编程语言。Java 编译器 `javac` 将源代码转换为字节码。然后解释器解释执行字节码。
下面是一个简单的 Java 程序, `Hello.java`
```
//Hello.java
@ -32,7 +35,7 @@ public class Hello {
}
```
Compile it using `javac` to generate a `Hello.class` file containing the bytecode.
使用 `javac` 编译它,生成包含字节码的 `Hello.class` 文件。
```
$ javac Hello.java
@ -40,7 +43,7 @@ $ ls
Hello.class  Hello.java
```
Now, use `javap` to disassemble the content of the `Hello.class` file. The output of `javap` depends on the options used. If you don't choose any options, it prints basic information, including which source file this class file is compiled from, the package name, public and protected fields, and methods of the class.
现在,使用 `javap` 来反汇编 `Hello.class` 文件的内容。使用 `javap` 时如果不指定任何选项,它将打印基本信息,包括编译这个 `.class` 文件的源文件、包名称、公共和受保护字段以及类的方法。
```
$ javap Hello.class
@ -51,7 +54,7 @@ public class Hello {
}
```
To see the bytecode content in the `.class` file, use the `-c` option:
要查看 `.class` 文件中的字节码内容,使用 `-c` 选项:
```
$ javap -c Hello.class
@ -73,15 +76,15 @@ java/io/PrintStream.println:(Ljava/lang/String;)V
}
```
To get more detailed information, use the `-v` option:
要获取更详细的信息,使用 `-v` 选项:
```
$ javap -v Hello.class
```
### Interpreter, JIT, AOT
### 解释器、JIT 和 AOT
The interpreter is responsible for emulating the execution of bytecode instructions for the abstract machine on a specific physical machine. When compiling source code using `javac` and executing using the `java` command, the interpreter operates during runtime and serves its purpose.
解释器负责在物理机器上模拟出的抽象计算机上执行字节码指令。当使用 `javac` 编译源代码、再用 `java` 命令执行程序时,解释器就在运行期工作,完成它的任务。
```
$ javac Hello.java
@ -89,11 +92,11 @@ $ java Hello
Inside Hello World!
```
The JIT compiler also operates at runtime. When the interpreter interprets a Java program, another component, called a runtime profiler, is silently monitoring the program's execution to observe which portion of the code is getting interpreted and how many times. These statistics help detect the *hotspots* of the program, that is, those portions of code frequently being interpreted. Once they're interpreted above a set threshold, they are eligible to be converted into machine code directly by the JIT compiler. The JIT compiler is also known as a profile-guided compiler. Conversion of bytecode to native code happens on the fly, hence the name just-in-time. JIT reduces overhead of the interpreter emulating the same set of instructions to machine code.
JIT 编译器也在运行期发挥作用。当解释器解释 Java 程序时,另一个称为运行时 <ruby>分析器<rt>profiler</rt></ruby> 的组件会静默地监视程序的执行,统计各部分代码被解释的次数。基于这些统计信息可以检测出程序的 <ruby>热点<rt>hotspot</rt></ruby>,即那些经常被解释的代码。一旦某段代码被解释的次数超过设定的阈值,它就满足了被 JIT 编译器直接转换为机器码的条件所以 JIT 编译器也被称为基于性能分析的编译器。从字节码到机器码的转换是在程序运行过程中进行的因此称为即时编译。JIT 减少了解释器反复模拟执行同一组指令的开销。
The AOT compiler compiles code during build time. Generating frequently interpreted and JIT-compiled code at build time improves the warm-up time of the Java Virtual Machine (JVM). This compiler was introduced in Java 9 as an experimental feature. The `jaotc` tool uses the Graal compiler, which is itself written in Java, for AOT compilation.
AOT 编译器在构建期编译代码。在构建时就把需要频繁解释和 JIT 编译的代码直接编译为机器码,可以缩短 <ruby>Java 虚拟机<rt>Java Virtual Machine</rt></ruby>JVM 的 <ruby>预热<rt>warm-up</rt></ruby> 时间。LCTT 译注Java 程序启动后,字节码首先被解释执行,此时执行效率较低。等到程序运行了足够长的时间后,代码热点被检测出来,JIT 开始发挥作用程序运行效率随之提升。JIT 发挥作用之前的这个过程就是预热。AOT 是在 Java 9 中引入的一个实验性特性。`jaotc` 使用 Graal 编译器(它本身也是用 Java 编写的)来实现 AOT 编译。
Here's a sample use case for a Hello program:
以 `Hello.java` 为例:
```
//Hello.java
@ -110,9 +113,9 @@ $ java -XX:+UnlockExperimentalVMOptions -XX:AOTLibrary=./libHello.so Hello
Inside Hello World!
```
### When do interpreting and compiling come into play: an example
### 解释和编译发生的时机
This example illustrates when Java uses an interpreter and when JIT and AOT pitch in. Consider a simple Java program, `Demo.java` :
下面通过例子来展示 Java 在什么时候使用解释器,以及 JIT 和 AOT 何时参与进来。这里有一个简单的程序 `Demo.java` :
```
//Demo.java
@ -136,7 +139,7 @@ public class Demo {
}
```
This simple program has a `main` method that creates a `Demo` object instance, and calls the method `square`, which displays the square root of the `for` loop iteration value. Now, compile and run the code:
在这个程序的 `main()` 方法中创建了一个 `Demo` 对象的实例,并调用该实例的 `square()`方法,然后显示 `for` 循环迭代变量的平方值。编译并运行它:
```
$ javac Demo.java
@ -159,7 +162,7 @@ Time taken= 66498
--------------------------------
```
The question now is whether the output is a result of the interpreter, JIT, or AOT. In this case, it's wholly interpreted. How did I conclude that? Well, to get JIT to contribute to the compilation, the hotspots of the code must be interpreted above a defined threshold. Then and only then are those pieces of code queued for JIT compilation. To find the threshold for JDK 11:
上面的结果是由谁产生的呢?是解释器、JIT 还是 AOT在目前的情况下它完全是通过解释产生的。我是怎么得出这个结论的呢只有当代码被解释的次数超过某个阈值时这些热点代码片段才会被加入 JIT 编译队列只有这时JIT 编译才会发挥作用。使用以下命令查看 JDK 11 中的该阈值:
```
$ java -XX:+PrintFlagsFinal -version | grep CompileThreshold
@ -170,9 +173,9 @@ OpenJDK Runtime Environment 18.9 (build 11.0.13+8)
OpenJDK 64-Bit Server VM 18.9 (build 11.0.13+8, mixed mode, sharing)
```
The above output demonstrates that a particular piece of code should be interpreted 10,000 times to be eligible for JIT compilation. Can this threshold be manually tuned, and is there some JVM flag that indicates whether a method is JIT compiled? Yes, there are multiple options to serve this purpose.
上面的输出表明,一段代码被解释 10,000 次才符合 JIT 编译的条件。这个阈值是否可以手动调整呢?是否有 JVM 标志可以指示出方法是否被 JIT 编译了呢?答案是肯定的,而且有多种方式可以达到这个目的。
One option for learning whether a method is JIT compiled is `-XX:+PrintCompilation`. Along with this option, the flag `-Xbatch` provides the output in a more readable way. If both interpretation and JIT are happening in parallel, the `-Xbatch` flag helps distinguish the output of both. Use these flags as follows:
使用 `-XX:+PrintCompilation` 选项可以查看一个方法是否被 JIT 编译。除此之外,使用 `-Xbatch` 标志可以提高输出的可读性。如果解释和 JIT 同时发生,`-Xbatch` 可以帮助区分两者的输出。使用这些标志如下:
```
$ java -Xbatch  -XX:+PrintCompilation  Demo
@ -190,13 +193,13 @@ Time taken= 50150
--------------------------------
```
The output of the above command is too lengthy, so I've truncated the middle portion. Note that along with the Demo program code, the JDKs internal class functions are also getting compiled. This is why the output is so lengthy. Because my focus is `Demo.java` code, I'll use an option that can minimize the output by excluding the internal package functions. The command -`XX:CompileCommandFile` disables JIT for internal classes:
注意,上面命令的实际输出太长了,这里我只是截取了一部分。输出很长的原因是除了 `Demo` 程序的代码外JDK 内部类的函数也被编译了。由于我的重点是 `Demo.java` 代码,我希望排除内部包的函数来简化输出。通过选项 `-XX:CompileCommandFile` 可以禁用内部类的 JIT
```
$ java -Xbatch -XX:+PrintCompilation -XX:CompileCommandFile=hotspot_compiler Demo
```
The file `hotspot_compiler` referenced by `-XX:CompileCommandFile` contains this code to exclude specific packages:
在选项 `-XX:CompileCommandFile` 指定的文件 `hotspot_compiler` 中包含了要排除的包:
```
$ cat hotspot_compiler
@ -206,7 +209,7 @@ exclude jdk/* *
exclude sun/* *
```
In the first line, `quiet` instructs the JVM not to write anything about excluded classes. To tune the JIT threshold, use `-XX:CompileThreshold` with the value set to 5, meaning that after interpreting five times, it's time for JIT:
第一行的 `quiet` 告诉 JVM 不要输出任何关于被排除类的内容。用 `-XX:CompileThreshold` 将 JIT 阈值设置为 5。这意味着在解释 5 次之后,就会进行 JIT 编译:
```
$ java -Xbatch -XX:+PrintCompilation -XX:CompileCommandFile=hotspot_compiler \
@ -246,7 +249,7 @@ Time taken= 26492
--------------------------------
```
The output is still not different from interpreted output! This is because, as per Oracle's documentation, the `-XX:CompileThreshold` flag is effective only when `TieredCompilation` is disabled:
好像输出结果跟只用解释时并没有什么区别。根据 Oracle 的文档,这是因为只有禁用 `TieredCompilation``-XX:CompileThreshold` 才会生效:
```
$ java -Xbatch -XX:+PrintCompilation -XX:CompileCommandFile=hotspot_compiler \
@ -282,7 +285,7 @@ Square(i) = 100
Time taken= 52393
```
This section of code is now JIT compiled after the fifth interpretation:
可以看到在第五次迭代之后,代码片段被 JIT 编译了:
```
--------------------------------
@ -294,11 +297,11 @@ Time taken= 983002
--------------------------------
```
Along with the `square()` method, the constructor is also getting JIT compiled because there is a Demo instance inside the `for` loop before calling `square()`. Hence, it will also reach the threshold and be JIT compiled. This example illustrates when JIT comes into play after interpretation.
可以看到,与 `square()` 方法一起,构造方法也被 JIT 编译了。在 `for` 循环中调用 `square()` 之前要先构造 `Demo` 实例,所以构造方法的解释次数同样达到 JIT 编译阈值。这个例子说明了在解释发生之后何时 JIT 会介入。
To see the compiled version of the code, use the `-XX:+PrintAssembly flag`, which works only if there is a disassembler in the library path. For OpenJDK, use the `hsdis` disassembler. Download a suitable disassembler library— in this case, `hsdis-amd64.so` — and place it under `Java_HOME/lib/server`. Make sure to use `-XX:+UnlockDiagnosticVMOptions` before `-XX:+PrintAssembly`. Otherwise, JVM will give you a warning.
要查看编译后的代码,需要使用 `-XX:+PrintAssembly` 标志,该标志仅在库路径中有反汇编器时才起作用。对于 OpenJDK使用 `hsdis` 作为反汇编器。下载合适版本的反汇编程序库,在本例中是 `hsdis-amd64.so`,并将其放在 `$JAVA_HOME/lib/server` 目录下。使用时还需要在 `-XX:+PrintAssembly` 之前增加 `-XX:+UnlockDiagnosticVMOptions` 选项。否则JVM 会给你一个警告。
The entire command is as follows:
完整命令如下:
```
$ java -Xbatch -XX:+PrintCompilation -XX:CompileCommandFile=hotspot_compiler \ -XX:-TieredCompilation -XX:CompileThreshold=5 -XX:+UnlockDiagnosticVMOptions \ -XX:+PrintAssembly Demo
@ -354,23 +357,23 @@ Square(i) = 100
Time taken= 52888
```
The output is lengthy, so I've included only the output related to `Demo.java`.
我只截取了输出中与 `Demo.java` 相关的部分。
Now it's time for AOT compilation. This option was introduced in JDK9. AOT is a static compiler to generate the `.so` library. With AOT, the interested classes can be compiled to create an `.so` library that can be directly executed instead of interpreting or JIT compiling. If JVM doesn't find any AOT-compiled code, the usual interpretation and JIT compilation takes place.
现在再来看看 AOT 编译。它是在 JDK9 中引入的特性。AOT 是用于生成 `.so` 这样的库文件的静态编译器。用 AOT 可以将指定的类编译成 `.so` 库。这个库可以直接执行,而不用解释或 JIT 编译。如果 JVM 没有检测到 AOT 编译的代码,它会进行常规的解释和 JIT 编译。
The command used for AOT compilation is as follows:
使用 AOT 编译的命令如下:
```
$ jaotc --output=libDemo.so Demo.class
```
To see the symbols in the shared library, use the following:
用下面的命令来查看共享库的符号表:
```
$ nm libDemo.so
```
To use the generated `.so` library, use `-XX:AOTLibrary` along with `-XX:+UnlockExperimentalVMOptions` as follows:
要使用生成的 `.so` 库,使用 `-XX:+UnlockExperimentalVMOptions``-XX:AOTLibrary`
```
$ java -XX:+UnlockExperimentalVMOptions -XX:AOTLibrary=./libDemo.so Demo
@ -387,7 +390,7 @@ Square(i) = 100
Time taken= 42085
```
This output looks as if it is an interpreted version itself. To make sure that the AOT compiled code is utilized, use `-XX:+PrintAOT` :
从输出上看,跟完全用解释的情况没有区别。为了确认 AOT 发挥了作用,使用 `-XX:+PrintAOT`
```
$ java -XX:+UnlockExperimentalVMOptions -XX:AOTLibrary=./libDemo.so -XX:+PrintAOT Demo
@ -408,7 +411,7 @@ Square(i) = 100
Time taken= 53586
```
Just to make sure that JIT compilation hasn't happened, use the following:
要确认没有发生 JIT 编译,用如下命令:
```
$ java -XX:+UnlockExperimentalVMOptions -Xbatch -XX:+PrintCompilation \ -XX:CompileCommandFile=hotspot_compiler -XX:-TieredCompilation \ -XX:CompileThreshold=3 -XX:AOTLibrary=./libDemo.so -XX:+PrintAOT Demo
@ -427,7 +430,7 @@ Square(i) = 100
Time taken= 59554
```
If any small change is made to the source code subjected to AOT, it's important to ensure that the corresponding `.so` is created again. Otherwise, the stale AOT-compiled `.so` won't have any effect. For example, make a small change to the square function such that now it's calculating cube:
需要特别注意的是,修改被 AOT 编译了的源代码后,一定要重新生成 `.so` 库文件。否则,过时的 AOT 编译库文件不会起作用。例如,修改 `square()` 方法,使其计算立方值:
```
//Demo.java
@ -451,13 +454,13 @@ public class Demo {
}
```
Now, compile `Demo.java` again:
重新编译 `Demo.java`
```
$ java Demo.java
```
But, don't create `libDemo.so` using `jaotc`. Instead, use this command:
但不重新生成 `libDemo.so`。使用下面命令运行 `Demo`
```
$ java -XX:+UnlockExperimentalVMOptions -Xbatch -XX:+PrintCompilation -XX:CompileCommandFile=hotspot_compiler -XX:-TieredCompilation -XX:CompileThreshold=3 -XX:AOTLibrary=./libDemo.so -XX:+PrintAOT Demo
@ -482,11 +485,13 @@ sqrt(i) = 1000
Time taken= 47132
```
Though the old version of `libDemo.so` is loaded, JVM detected it as a stale one. Every time a `.class` file is created, a fingerprint goes into the class file, and a class fingerprint is kept in the AOT library. Because the class fingerprint is different from the one in the AOT library, AOT-compiled native code is not used. Instead, the method is now JIT compiled, because the `-XX:CompileThreshold` is set to 3.
可以看到,虽然旧版本的 `libDemo.so` 被加载了,但 JVM 检测出它已经过时了。每次生成 `.class` 文件时,都会在类文件中添加一个指纹,并在 AOT 库中保存该指纹。修改源代码后,类指纹与旧的 AOT 库中的指纹不匹配了,所以没有执行 AOT 编译生成的原生机器码。从输出可以看出,现在实际上是 JIT 在起作用(注意 `-XX:CompileThreshold` 被设置为了 3
### AOT or JIT?
### AOT 和 JIT 之间的权衡
If you are aiming to reduce the warm-up time of the JVM, use AOT, which reduces the burden during runtime. The catch is that AOT will not have enough data to decide which piece of code needs to be precompiled to native code.  By contrast, JIT pitches in during runtime and impacts the warm-up time. However, it will have enough profiling data to compile and decompile the code more efficiently.
如果你的目标是减少 JVM 的预热时间,请使用 AOT这可以减少运行时负担。问题是 AOT 没有足够的数据来决定哪段代码需要预编译为原生代码。相比之下JIT 在运行时起作用,却对预热时间有一定的影响。然而,它将有足够的分析数据来更高效地编译和反编译代码。
*题图MJ/ed3e6e15-56c7-4c1d-aff1-84a225faeeeb*
--------------------------------------------------------------------------------
@ -494,8 +499,8 @@ via: https://opensource.com/article/22/8/interpret-compile-java
作者:[Jayashree Huttanagoudar][a]
选题:[lkxed][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
译者:[toknow-gh](https://github.com/toknow-gh)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
@ -504,3 +509,4 @@ via: https://opensource.com/article/22/8/interpret-compile-java
[1]: https://opensource.com/sites/default/files/lead-images/studying-books-java-couch-education.png
[2]: https://www.wocintechchat.com/
[3]: https://creativecommons.org/licenses/by/2.0/
[0]: https://img.linux.net.cn/data/attachment/album/202311/06/093552kheiob71meqierhd.png


@ -0,0 +1,121 @@
[#]: subject: "How ODT files are structured"
[#]: via: "https://opensource.com/article/22/8/odt-files"
[#]: author: "Jim Hall https://opensource.com/users/jim-hall"
[#]: collector: "lkxed"
[#]: translator: "toknow-gh"
[#]: reviewer: "wxy"
[#]: publisher: "wxy"
[#]: url: "https://linux.cn/article-16340-1.html"
开放文档格式 ODT 文件格式解析
======
![][0]
> 开放文档格式ODF基于开放标准你可以使用其它工具检查它们甚至从中提取数据。你只需要知道从哪里开始。
过去,文字处理文件是封闭的专有格式。在一些较旧的文字处理软件中,文档文件本质上是该软件的内存转储。虽然这样可以让加载文件更快,但也使文档文件格式变得不透明。
2005 年左右,<ruby>结构化信息标准促进组织<rt>Organization for the Advancement of Structured Information Standards</rt></ruby>OASIS为所有类型的办公文档定义了一种开放格式<ruby>办公应用程序开放文档格式<rt>Open Document Format for Office Applications</rt></ruby>ODF。由于 ODF 是基于 [OpenOffice.org][4] 的 XML 文件规范的开放式标准,因此你也可以将其简称为 "开放文档格式"。ODF 包括几种文件类型,其中用于 <ruby>开放文档文本<rt>OpenDocument Text</rt></ruby> 文档的是 ODT。ODT 文件中有很多值得探索的内容,它的本质是一个 Zip 文件。
### ODT 文件结构
跟所有 ODF 文件一样ODT 文件实际上是一个 XML 文档和其它文件的 Zip 压缩包。使用 Zip 可以占用更少的磁盘空间,同时也意味着可以用标准 Zip 工具来检查它。
我有一篇关于 IT 领导力的文章名为“Nibbled to death by ducks”我将其保存为 ODT 文件。由于 ODF 文件是一个 zip 容器,你可以用 `unzip` 命令来检查它:
```
$ unzip -l 'Nibbled to death by ducks.odt'
Archive: Nibbled to death by ducks.odt
Length Date Time Name
39 07-15-2022 22:18 mimetype
12713 07-15-2022 22:18 Thumbnails/thumbnail.png
915001 07-15-2022 22:18 Pictures/10000201000004500000026DBF6636B0B9352031.png
10879 07-15-2022 22:18 content.xml
20048 07-15-2022 22:18 styles.xml
9576 07-15-2022 22:18 settings.xml
757 07-15-2022 22:18 meta.xml
260 07-15-2022 22:18 manifest.rdf
0 07-15-2022 22:18 Configurations2/accelerator/
0 07-15-2022 22:18 Configurations2/toolpanel/
0 07-15-2022 22:18 Configurations2/statusbar/
0 07-15-2022 22:18 Configurations2/progressbar/
0 07-15-2022 22:18 Configurations2/toolbar/
0 07-15-2022 22:18 Configurations2/popupmenu/
0 07-15-2022 22:18 Configurations2/floater/
0 07-15-2022 22:18 Configurations2/menubar/
1192 07-15-2022 22:18 META-INF/manifest.xml
970465 17 files
```
我想强调 Zip 文件结构的以下几个元素:
1. `mimetype` 文件用于定义 ODF 文档。处理 ODT 文件的程序,如文字处理程序,可以使用该文件来验证文档的 MIME 类型。对于 ODT 文件,它应该总是:
```
application/vnd.oasis.opendocument.text
```
2. `META-INF` 目录中有一个 `manifest.xml` 文件。它包含查找 ODT 文件其它组件的所有信息。任何读取 ODT 文件的程序都从这个文件开始定位其它内容。例如,我的 ODT 文档的 `manifest.xml` 文件包含这一行,它定义了在哪里可以找到主要内容:
```
<manifest:file-entry manifest:full-path="content.xml" manifest:media-type="text/xml"/>
```
3. `content.xml` 文件包含文档的实际内容。
4. 我的文档中只有一张截图,它位于 `Pictures` 目录中。
### 从 ODT 中提取文件
由于 ODT 文档是一个具有特定结构的 Zip 文件,因此可以从中提取文件。你可以先解压缩整个 ODT 文件,例如使用 `unzip` 命令:
```
$ unzip -q 'Nibbled to death by ducks.odt' -d Nibbled
```
一位同事最近向我要了一份我在文章中提到的图片。通过查看 `META-INF/manifest.xml` 文件,我找到了嵌入图像的确切位置。用 `grep` 命令可以找到描述图像的行:
```
$ cd Nibbled
$ grep image META-INF/manifest.xml
<manifest:file-entry manifest:full-path="Thumbnails/thumbnail.png" manifest:media-type="image/png"/>
<manifest:file-entry manifest:full-path="Pictures/10000201000004500000026DBF6636B0B9352031.png" manifest:media-type=" image/png/>
```
我要找的图像保存在 `Pictures` 文件夹中。可以通过列出目录的内容来验证:
```
$ ls -F
Configurations2/ manifest.rdf meta.xml Pictures/ styles.xml
content.xml META-INF/ mimetype settings.xml Thumbnails/
```
就是这张图片:
![Image of rubber ducks in two bowls][5]
### 开放文档格式
ODF 是一种开放的文件格式它可以描述文字处理文件ODT、电子表格文件ODS、演示文稿ODP和其它文件类型。由于 ODF 格式基于开放标准,因此可以使用其他工具检查它们,甚至从中提取数据。你只需要知道从哪里开始。所有 ODF 文件都以 `META-INF/manifest.xml` 为"引导"文件,通过它你能找到其余的所有内容。
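举个简单的例子,下面是一段基于 Python 标准库的示意脚本(脚本名和用法只是假设的示例):它先检查 `mimetype` 确认这是一个 ODT 文件,再从 `content.xml` 中粗略地提取纯文本。真实的 ODT 内容还带有样式等更多结构,这里只演示"从哪里开始"。
```
# odt_text.py仅作示意的脚本名用法python3 odt_text.py 文件名.odt
import sys
import zipfile
import xml.etree.ElementTree as ET

path = sys.argv[1]

with zipfile.ZipFile(path) as odt:
    # mimetype 文件标识文档类型,对 ODT 来说应该是固定值
    mimetype = odt.read("mimetype").decode("utf-8")
    if mimetype != "application/vnd.oasis.opendocument.text":
        sys.exit(f"这不是一个 ODT 文件: {mimetype}")

    # content.xml 保存文档的实际内容,这里只简单地拼接所有文本节点
    root = ET.fromstring(odt.read("content.xml"))
    print("".join(root.itertext()))
```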
*题图MJ/d245ab34-f0b0-452c-b29a-ece9aa78f11a*
--------------------------------------------------------------------------------
via: https://opensource.com/article/22/8/odt-files
作者:[Jim Hall][a]
选题:[lkxed][b]
译者:[toknow-gh](https://github.com/toknow-gh)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/jim-hall
[b]: https://github.com/lkxed
[1]: https://opensource.com/sites/default/files/lead-images/coffee_tea_laptop_computer_work_desk.png
[2]: https://unsplash.com/@jonasleupe?utm_source=unsplash&utm_medium=referral&utm_content=creditCopyText
[3]: https://unsplash.com/s/photos/tea-cup-computer?utm_source=unsplash&utm_medium=referral&utm_content=creditCopyText
[4]: http://OpenOffice.org
[5]: https://opensource.com/sites/default/files/2022-07/ducks.png
[0]: https://img.linux.net.cn/data/attachment/album/202311/01/223607j3nvjcuw5jcocbz3.jpg


@ -0,0 +1,112 @@
[#]: subject: "Why Companies Need to Set Up an Open Source Program Office"
[#]: via: "https://www.opensourceforu.com/2022/08/why-companies-need-to-set-up-an-open-source-program-office/"
[#]: author: "Sakshi Sharma https://www.opensourceforu.com/author/sakshi-sharma/"
[#]: collector: "lkxed"
[#]: translator: "ChatGPT"
[#]: reviewer: "wxy"
[#]: publisher: "wxy"
[#]: url: "https://linux.cn/article-16341-1.html"
为什么公司需要设立开源项目办公室
====
![][0]
> 要想软件产品能够成功,关键在于管理开源软件的使用并降低合规风险。开源项目办公室能帮助组织实现这一目标。让我们一起深入了解。
<ruby>开源软件<rt>Open source software</rt></ruby>OSS是构建现代软件解决方案的重要组成部分。无论是服务于内部还是面向客户的解决方案如今的组织都在很大程度上依赖于开源软件。开源软件组件受其各自的授权条款约束对这些条款的不合规操作往往会使组织面临安全和知识产权IP风险进而可能损害公司的品牌价值。
当开发团队正忙于发布软件版本时,他们的主要目标是满足项目的截止日期。因此,他们在跟踪组件版本、库或者引入项目的第三方代码时,往往疏于应有的严格性。这意味着带有许可限制或漏洞的开源软件组件可能会进入代码库,然后交付给客户,这对客户和提供软件解决方案的公司都将带来风险。
开发人员为开源项目做出贡献的领域也日益具有挑战性。如果公司能够参与,他们可以获得多种益处,包括保持技能的最新性,挽留员工,吸引开发者为组织工作,以及提升公司形象。很多开源项目要求开发者签署贡献者许可协议,该协议声明由开发者创建的知识产权属于该项目,而非开发者本人。在这种情况下,组织需要确保那些不公开源代码的知识产权和商业机密不会被转让给开源项目。
我们需要教育开发者去了解开源许可的相关问题,确定何时、如何以及在何种程度上向社区提供支持,以及哪些软件包可能会给组织的声誉带来风险。通过制定一套战略性的政策和操作流程,我们可以规范这一切。实现上述目标的一种方式就是设立一个专门处理所有开源相关事务的部门,即 <ruby>开源项目办公室<rt>open source program office</rt></ruby>OSPO
OSPO 为员工使用开源软件创建了一个生态环境使合规风险得到良好的控制。OSPO 的角色不仅在于监督开源软件的使用,它还负责回馈社区,并通过积极参与各种活动以及组织网络研讨会和促销活动,来推动公司在市场上的增长。
在这篇文章里,我们将深入探讨为何公司需要设立一个 OSPO以及它是如何在开源政策和管理程序中崭露头角的。
### 为何我们需要一个开源项目办公室OSPO
由于开源软件的应用非常广泛,在产品开发周期中对各团队的使用情况进行监管、维持合规性策略,往往会带来很大压力。
开发者往往会忽略许可证责任有时甚至管理层或各利益相关方也并未完全意识到不遵守这些开源许可证的影响。不论是用于内部还是外部目的OSPO 都能处理从开始引入开源软件,直至交付给终端用户的过程中的所有环节。
通过在软件开发生命周期的早期阶段就开始进行合规性和规章制度的检查OSPO 能构筑坚实的基础。这通常开始于引导和整合团队成员共同迈向一个能惠及组织价值观的方向。OSPO 会设定关于开源使用的政策和流程,并在公司内部进行角色和职责的管理。
总结来说OSPO 有助于整合所有参与产品构建的相关团队的努力,进而提升组织更好、更有效地使用开源的能力。
#### 开源项目办公室OSPO的崛起
诸如微软、谷歌和 Netflix 等公司已经在自身组织内部设立了成熟的 OSPO。此外像 Porsche 和 Spotify 这样的公司也在建立自己的 OSPO以实现开源的高效利用。
以下是一些知名公司的领导者对 OSPO 实践的看法:
* “对于公司来说,这是一种文化的变迁。”Jeff McAffer 解释了他的观点,他曾经多年负责微软的 OSPO现在是 GitHub 的产品主管,致力于在企业界推动开源的发展。“很多公司并不习惯与外部团队合作。”
* “工程、业务、法律,每一方的利益相关者都有他们各自的目标和角色,往往需要在速度、质量和风险之间做出权衡,”Spotify 的开源主管 Remy DeCausemaker 解释道。“OSPO 的任务就是协调和连接这些单独的目标,融合成一个能够减少摩擦的全面策略。”
* Verizon Media 的 OSPO 领导 Gil Yahuda 表示:“我们正在努力创造一个让人才愿意融入其中的工作环境。我们的工程师都知道,他们处在一个欢迎开源的环境中,他们在这里被鼓励与他们工作相关的开源社区合作。”
![图 12018-2021 年各行业开源项目办公室的普及情况来源https://github.com/todogroup/osposurvey/tree/master/2021][1]
### 开源项目办公室OSPO的职能
OSPO 的职能可能会根据组织的员工数量、OSPO 团队的人数以及开源的运用目的不同而有所差异。组织可能只想利用开源软件来开发产品,也可能同时计划向社区做出贡献。
OSPO 的角色可能会包括评估哪些开源许可证是适宜的以及是否应让全职员工参与开源项目等任务。为愿意贡献的开发人员制定贡献者许可协议CLA并确定哪些开源组件有助于产品的快速成长和质量提升也是 OSPO 的重要职责。
OSPO 的主要职能包括但不限于:
* 建立开源合规和治理政策来降低组织的知识产权风险
* 培育开发者做出更佳决策的能力
* 制定政策规范与公司全面采用开源的工作。
* 监控组织内外开源软件的使用情况
* 在每次软件版本发布后组织会议,讨论开源软件合规流程的优点及改进空间
* 加快软件开发生命周期SDLC
* 提高不同部门之间的透明度和协调性
* 通过简化流程在早期阶段降低风险
* 鼓励团队成员向上游贡献,以享受开源项目的协作和创新优势
* 提供包含合适补救措施和产品团队建议的报告
* 准备合规文档,确保满足许可证的义务
### 构建开源项目办公室OSPO的过程
OSPO 的组成通常包括公司内多个部门的人员。这个过程涉及了对相关部门进行开源合规基础和使用风险的培训与教育。OSPO 可能提供法律和技术支持,以确保达成开源的目标需求。
组织内的 OSPO 可能包括以下人员(这只是一个可能参与的人员名单,并不是详尽无遗的清单):
* 主任/首席:主任或首席通常是 OSPO 的主要负责人。他能全方位掌控使用开源的各个方面,包括使用不同组件的影响,许可证的含义,以及开发和社区贡献等。这些要求完全取决于公司的需求。
* 项目经理:项目经理为目标解决方案设置需求和目标。他/她将与产品和工程团队共同工作,以协调工作流程。这包括以开发者友好的方式确保策略和工具的实施。
* 法律支持:法律支持可能来自公司外部或者内部,但他们在 OSPO 中扮演着重要角色。法律团队将与项目经理密切合作,定义管理开源软件使用的策略,包括每个产品允许使用的开源许可证,如何(或是否)向现有的开源项目贡献等。
* 产品和工程团队/开发者:工程团队需要熟悉开源许可及其相关风险。团队在使用任何开源组件之前,必须得到 OSPO 的批准。团队可能需要定期接受关于开源合规基础以及其使用的培训。
* 首席技术官/信息官/利益相关者:公司的领导对 OSPO 策略有着巨大影响。利益相关者在任何产品解决方案的决策过程中拥有很大的决定权。因此,工程副总裁,首席技术官/信息官,或者首席合规/风险官员需要参与 OSPO 的工作。
* IT 团队:来自 IT 部门的支持十分重要。OSPO 可能被分配实施内部工具的任务如提高开发者效率监控开源合规或者设置开源安全措施等。IT 团队在协助连接工作流程和确保以开发者友好的方式实施策略方面起着关键作用。
在 TODO 组织于 2021 年执行的 OSPO 调查中,得出了以下的关键发现:
* 教育企业理解 OSPO 如何为他们带来益处的机会仍然巨大。
* OSPO 对其赞助方的软件实践有显著的积极影响,但影响的具体效果会因组织规模的大小而异。
* 那些有意设立 OSPO 的公司,他们期望 OSPO 能提升创新,但策略设立及预算力度仍然是实现目标的主要挑战。
* 调查参与者中近半数尚未设立 OSPO 的人认为 OSPO 将有助于他们公司的发展,然而在那些认为 OSPO 无助于公司发展的人群中,有 35% 的人还未对此事有所考虑。
* 27% 的调查参与者表示,一家公司对开源参与的程度会深刻影响他们组织的购买决策。
如今,在构建任何软件解决方案时,对开源软件的依赖几乎是无法避免的。然而,开源许可证相关的潜在风险也不容忽视。因此,我们需要一套策略性的流程来有效解决使用开源组件带来的合规性问题。
通过建立一支集中的专业团队OSPO 能帮助公司确立规范的开源文化让员工了解并熟悉与组织内开源使用相关的所有事宜。此外OSPO 还可以发挥引导作用,吸纳行业内的顶级人才,这无疑将对实现商业目标产生积极影响。
*题图MJ/9a3e106d-0710-4dd7-b278-ef1056c5c5ab*
--------------------------------------------------------------------------------
via: https://www.opensourceforu.com/2022/08/why-companies-need-to-set-up-an-open-source-program-office/
作者:[Sakshi Sharma][a]
选题:[lkxed][b]
译者:[ChatGPT](https://linux.cn/lctt/ChatGPT)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.opensourceforu.com/author/sakshi-sharma/
[b]: https://github.com/lkxed
[1]: https://www.opensourceforu.com/wp-content/uploads/2022/07/Figure-1-OSPO-prevalence-by-industry-2018-2021-2.jpg
[0]: https://img.linux.net.cn/data/attachment/album/202311/01/232800a8c8bk3b83rtbn6x.jpg


@ -0,0 +1,132 @@
[#]: subject: "Artificial Intelligence: Explaining the Basics"
[#]: via: "https://www.opensourceforu.com/2022/08/artificial-intelligence-explaining-the-basics/"
[#]: author: "Deepu Benson https://www.opensourceforu.com/author/deepu-benson/"
[#]: collector: "lkxed"
[#]: translator: "toknow-gh"
[#]: reviewer: "wxy"
[#]: publisher: "wxy"
[#]: url: "https://linux.cn/article-16326-1.html"
人工智能教程(一):基础知识
======
![][0]
> 如果你是关注计算机领域最新趋势的学生或从业者,你应该听说过人工智能、数据科学、机器学习、深度学习等术语。作为人工智能系列文章的第一篇,本文将解释这些术语,并搭建一个帮助初学者入门的简易教学平台。
如今,计算机科学领域的学生和从业者绝对有必要了解<ruby>人工智能<rt>artificial intelligence</rt></ruby><ruby>数据科学<rt>data science</rt></ruby><ruby>机器学习<rt>machine learning</rt></ruby><ruby>深度学习<rt>deep learning</rt></ruby>方面的基本知识。但是应该从哪里开始呢?
为了找到答案,我浏览了大量人工智能的教材和教程。它们有的从大量数学理论开始,有的用编程语言无关的方式(不要求你了解某一门特定的编程语言)讲解,有的假设你是线性代数、概率论和统计学专家。在很大程度上,它们都很有用。但它们都没有回答最重要的问题:真正的初学者应该从哪里开始学习人工智能?
开始学习人工智能的方式多种多样,但是我对它们各有担忧。涉及太多的数学会让人分心,但如果数学介绍得太少就好像驾驶员不知道汽车引擎在哪里一样。对于未来的人工智能工程师和数据科学家来说,从进阶概念开始讲解是最有效率的方式,因为他们精通线性代数、概率论和统计学。如果从基础知识开始,然后在中间某个地方结束也可以,只要学员想要在这里结束学习。考虑到所有这些事实,我认为初学者的人工智能教程应该从基础知识开始,并以一个实际的人工智能项目结束。这个项目可能很小,但是在相同任务上它将会超越任何传统项目。
本系列将从最基础的知识讲到中等水平内容。除了讨论人工智能,我还希望对相关的话题进行一些澄清,因为人们对人工智能、机器学习、数据科学等术语有很多困惑。人工智能程序是必要的,因为我们每天会产生海量的数据。根据互联网上查询到的结果,我们每天大约会产生 2.5x10<sup>18</sup> 字节的数据。但是,这些数据中的大多数与我们完全无关,包括大量没有价值的 YouTube 视频,不经思考就发送的电子邮件,琐碎的新闻报道等等。然而,这片浩瀚的数据海洋中同样蕴含着无价的宝贵知识。传统软件无法完成处理这些数据的艰巨任务。人工智能是少数能够应对这种信息过载的技术之一。
当谈到人工智能时,我们还需要区分事实和假象。我记得几年前听一位人工智能专家的演讲。他讲述了一个人工智能图像识别系统,它能近乎绝对准确地分辨西伯利亚雪橇犬和西伯利亚雪狼的图像。在互联网上搜索一下,你会看到这两种动物有多么相似。如果这个系统确实那么准确,它将是人工智能的奇迹。可惜的是,事实并非如此。该图像识别系统只是对图像的背景进行了分类。西伯利亚雪橇犬是家养动物,它的图像背景中几乎总会有一些矩形或圆形的物体。而西伯利亚雪狼是野生动物,它所在的背景中有雪。这些例子导致近年来人们对人工智能提出了准确性担保要求。
确实,最近几年人工智能展现了一些真正的力量。举个简单例子就是 YouTube、Amazon 等网站的推荐系统。很多时候我惊讶于它们的推荐结果,就好像它们会读心术一样。然而不论这些推荐的质量如何,“人工智能到底是好是坏?”都是一个很热门话题。我认为,一个像《终结者》中机器有意识地攻击人类的未来还遥遥无期。然而,前面那句话中的“有意识地”一词非常重要。目前的人工智能系统可能发生故障,并且意外地伤害到人类。但是,许多号称具有人工智能能力的系统实际上只是包含大量分支和循环的常规软件。因此目前可以安全地说,我们还没有在日常生活中看到人工智能的真正威力。不论是好的影响(如治愈癌症),还是坏的影响(合成的世界领导人视频导致的暴动和战争),我们都只能拭目以待了。就个人而言,我相信人工智能是一种福祉,并将大大提高未来几代人的生活质量。
### 什么是人工智能?
在我们进一步探讨之前让我们试着理解人工智能AI、机器学习ML、深度学习DL、数据科学DS等之间的联系和区别。这些术语经常被误用为同义词。图 1 表示了人工智能、机器学习、深度学习和数据科学之间的关系。当然这不是唯一的划分方式,你可能会看到其它的划分图。但在我看来,图 1 是最贴切的,它能够最大程度地概括这些领域之间关系。
![图 1人工智能体系结构和数据科学][1]
在本系列的第一篇文章中,我不会对每个术语定义进行精确的定义。我认为在现阶段,精确地定义它们是适得其反的,是浪费时间。但在后续的文章中,我们将重新讨论这些术语并正式定义它们。目前我们可以暂时把人工智能看作是可以在某种程度上模仿人类智能的程序。那人类智能又是指什么呢?
想象一下你的人工智能程序是一个一岁大的婴儿。这个宝宝会通过听周围人说话来学习母语。他/她将很快学会识别形状,颜色,物体等,没有任何困难。此外,他/她将能够对周围人的情绪做出反应。例如,任何一个三岁的婴儿都知道如何用甜言蜜语让父母给他/她巧克力和棒棒糖。同样,人工智能程序也将能够感知并适应环境,就像婴儿一样。然而,这种真正的人工智能只能在遥远的未来实现。
图 1 显示机器学习是人工智能的真子集,它也是实现人工智能系统的技术之一。机器学习是使用大量数据来训练程序的技术,以便有效地执行必要的任务。它的准确性随着训练集的增大而增加。请注意,还有其它技术用于开发人工智能系统,如基于布尔逻辑的系统,基于模糊逻辑的系统,基于遗传编程的系统等。然而,如今机器学习是实现人工智能系统的最主流的技术。图 1 还显示深度学习是机器学习的真子集,它只是众多机器学习技术中的一种。但目前实际上大多数严肃的机器学习技术都用到了深度学习。在这一点上,我甚至避免尝试定义深度学习。请记住,深度学习涉及到使用大型人工神经网络。
那数据科学(图 1 中的红圈)是做什么的呢?数据科学是计算机科学/数学领域中的一门处理和解读大规模数据的学科。我说的“大”,有多大呢?早在 2010 年Facebook 等一些企业巨头就声称它们的服务器可以处理几 PB 的数据。当我们说大数据时,通常指的是 TB 或 PB 级的数据规模,而不是 GB 级的。许多数据科学应用涉及人工智能、机器学习和深度学习技术的使用。因此,当我们讨论人工智能时,很难不提到数据科学。数据科学也使用很多传统的编程和数据库管理技术,比如使用 Apache Hadoop 进行大数据分析。
本系列的讨论将主要集中在人工智能和机器学习上,并涉及数据科学。
### 教学环境搭建
在表明了本系列文章的主题后,现在说说本教程的前置条件。你需要一台 Linux 电脑(当然 Windows 或 macOS 机器也可以,只是在一些安装步骤上可能需要额外的协助),并了解基本的数学和计算机编程知识。我希望在细心地阅读本系列文章后,你会感受到人工智能的强大。
用编程语言无关的方式来学习人工智能是可能的但本系列将基于一门编程语言并涉及大量的编程。在决定使用哪一门编程语言之前我们先来回顾一下人工智能、机器学习、深度学习和数据科学领域流行的编程语言。Lisp 是一种函数式编程语言,它是最早用于开发人工智能程序的语言之一。Prolog 是一种逻辑编程语言,在 20 世纪 70 年代也被用于同样的目的。我们将在接下来的介绍人工智能历史的文章中更详细地介绍 Lisp 和 Prolog。
如今Java、C、C++、Scala、Haskell、MATLAB、R、Julia 等编程语言也被用于开发人工智能程序。Python 在人工智能程序开发中被广泛使用,这使我们选择它作为本教程的编程语言。但我必须声明,从这里开始做的选择(更确切地说,是我替你做的选择),主要考虑的因素是易用性、受欢迎程度、(在少数情况下)我自己对该软件/技术的适应和熟悉程度、对本教程效率的提升。但同时,我也鼓励你尝试其它的编程语言、软件和工具。也许从长远来看,它们对你来说可能是更好的选择。
现在我们需要立即做出另一个选择:使用 Python 2 还是 Python 3考虑到本系列有许多年轻的读者他们还有漫长的职业生涯我将选择使用 Python 3。在 Ubuntu 系统终端中执行命令 `sudo apt install python3` 安装最新版本的 Python 3你的系统中可能已经安装了 Python 3。在其它 Linux 发行版、Windows 和 macOS 机器上安装 Python 3 也非常容易。执行下面的命令查看安装的 Python 3 的版本:
```
python3 --version
Python 3.8.10
```
在后续的教程中,我们需要安装许多 Python 包,所以需要一个包管理器。目前主流的包管理器有 pip、Conda 和 Mamba 等。我选择 pip 作为本教程的包管理器。它相对简单,也是推荐的 Python 安装工具。我认为 Conda 和 Mamba 是比 pip 更强大的工具,你可以尝试一下它们。运行命令 `sudo apt install python3-pip` 将在 Ubuntu 系统中安装 pip。pip、Conda 和 Mamba 是跨平台软件,它们可以安装在 Linux、Windows 和 macOS 系统上。运行命令 `pip3 --version` 查看系统中安装的 pip 版本,如下所示:
```
pip 20.0.2 from /usr/lib/python3/dist-packages/pip (python 3.8)
```
现在我们需要一个 Python 集成开发环境IDE。IDE 能帮助程序员更容易地编写、编译、调试和执行代码。PyCharm、IDLE、Spyder 等都是流行的 Python IDE。然而由于我们的主要目的是开发人工智能和数据科学程序这里还要考虑另外两个强有力的竞争者 —— JupyterLab 和谷歌 Colab。严格地说它们不仅仅是 IDE它们是非常强大的基于网络的交互式开发环境。两者都可以在网络浏览器上工作并提供强大的功能。JupyterLab 是由非营利组织 Project Jupyter 支持的免费开源软件。谷歌 Colab 遵循 <ruby>免费增值<rt>freemium</rt></ruby> 模式,即基本功能免费,附加功能收费。我认为谷歌 Colab 比 JupyterLab 功能更强大。但是由于谷歌 Colab 的免费增值模式,以及我相对缺乏谷歌 Colab 的使用经验,在本教程中我选择 JupyterLab。但我仍然强烈建议你去了解一下谷歌 Colab。
可以使用命令 `pip3 install JupyterLab` 在本地安装 JupyterLab。执行命令 `jupyter-lab` 将在系统的默认网络浏览器中运行 JupyterLab。Project Jupyter 还提供一个更老的类似系统,称为 Jupyter Notebook。可以通过 `pip3 install Notebook` 命令在本地安装 Jupyter Notebook并用命令 `jupyter notebook` 运行它。但 Jupyter Notebook 的功能不如 JupyterLab 强大,且官方宣布它最终会被 JupyterLab 取代。在本教程中,我们将在合适的阶段使用 JupyterLab。但在开始阶段我们将使用 Linux 终端来运行 Python 程序,因此眼下需要的只是包管理器 pip。
Anaconda 是一个非常流行的 Python 和 R 编程语言发行版,它主要用于机器学习和数据科学领域。作为未来的人工智能工程师和数据科学家,熟悉使用 Anaconda 也是一个不错的选择。
现在我们需要确定最重要的一点 —— 本教程的风格。有大量人工智能开发相关的 Python 库,比如 NumPy、SciPy、Pandas、Matplotlib、Seaborn、TensorFlow、Keras、Scikit-learn 和 PyTorch。许多关于人工智能、机器学习和数据科学的教材和教程都是基于对其中一个或多个库的完整讲解。尽管对特定包的功能进行这样的覆盖讲解是一种高效的方式但我的教程是更面向数学的。我们将首先讨论开发人工智能程序所需的数学概念然后再介绍需要的 Python 基础知识和 Python 库。我们会为了探索实现这些数学概念所需的特性而不断回顾这些 Python 库。有时我也会要求你自己学习一些 Python 和数学的基本概念。
在完成这些准备工作之后,如果我们就在这里结束,任何代码或数学概念都不讲,那将是一种罪过。因此,我们将继续学习人工智能和机器学习中最重要的数学概念:向量和矩阵。
#### 向量和矩阵
矩阵是按行和列排列的数字、符号或数学表达式构成的矩形阵列。图 2 显示了一个 2 × 3 矩阵,它有 2 行和 3 列。如果你熟悉编程,在许多流行的编程语言中这个矩阵可以表示为一个二维数组。只有一行的矩阵称为行向量,只有一列的矩阵称为列向量。<!-- $[11, 22, 33]$ --> <img style="transform: translateY(0.1em); background: white;" src="https://latex.codecogs.com/png.latex?%5B11%2C%2022%2C%2033%5D"> 就是一个行向量。
![图 2一个 2 × 3 的矩阵][2]
为什么矩阵和向量在人工智能和机器学习中如此重要呢?人工智能和机器学习中广泛使用线性代数,而矩阵和向量是线性代数的核心。几个世纪以来,数学家们一直在研究矩阵和向量的性质和应用。高斯、欧拉、莱布尼茨、凯利、克莱姆和汉密尔顿等数学家在线性代数和矩阵论领域都有以他们的名字命名的定理。多年来,线性代数中发展出了许多分析矩阵和向量性质的技术。
复杂的数据通常可以很容易用向量或矩阵来表示。举一个简单的例子,从一个人的医疗记录中,可以得到详细的年龄、身高(厘米)、体重(公斤)、收缩压、舒张压和空腹血糖(毫克/分升)。这些信息可以很容易用行向量来表示,<!-- $[60, 160, 90, 130, 95, 160]$ --> <img style="transform: translateY(0.1em); background: white;" src="https://latex.codecogs.com/png.latex?%5B60%2C%20160%2C%2090%2C%20130%2C%2095%2C%20160%5D"> 。人工智能和机器学习的第一个挑战来了:如果医疗记录有十亿条怎么办?即使动用成千上万的专业人员从中手动提取数据,这项任务也是无法完成的。因此,人工智能和机器学习利用程序来提取数据。
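顺着这个例子,下面用 NumPy 粗略示意一下"用程序处理这类数据"的样子:把几条医疗记录堆叠成一个矩阵,再按列求平均。其中第一行向量来自上文,另外两行只是随手假设的数值,仅作演示。
```
import numpy as np

# 每行是一条医疗记录:年龄、身高、体重、收缩压、舒张压、空腹血糖
# 第一行来自正文示例,后两行是假设的数据
records = np.array([
    [60, 160, 90, 130, 95, 160],
    [45, 172, 78, 120, 80, 100],
    [70, 155, 62, 140, 90, 130],
])

print(records.shape)         # (3, 6)3 行 6 列的矩阵
print(records.mean(axis=0))  # 按列求平均,得到每项指标的平均值
```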
人工智能和机器学习的第二个挑战是数据解读。这是一个广阔的领域,有许多技术值得探索。我将在后续文章中介绍相关内容。人工智能和机器学习应用除了面临数学/计算方面的挑战外,还面临硬件方面的挑战。随着处理的数据量的增加,数据存储、处理器速度、功耗等也成为人工智能应用面临的重要挑战。但现在让我们先抛开这些挑战,动手编写第一行人工智能代码。
我们将编写一个简单的 Python 脚本,用来将两个向量相加。我们将用到名为 NumPy 的 Python 库,它支持多维矩阵(数组)的数学运算。用命令 `pip3 install numpy` 为 Python 3 安装 NumPy 包。如果你使用的是 JupyterLab、谷歌 Colab 或 Anaconda那么 NumPy 应该已经被预安装了。但是为了演示,在本系列的前几篇文章中,我们都将在 Linux 终端上操作。在 Linux 终端上执行命令 `python3` 进入 Python 控制台。在这个控制台中可以逐行执行 Python 代码。图 3 展示了在控制台中逐行运行 Python 代码,将两个向量相加,并输出结果。
![图 3两个向量求和的 Python 代码][3]
首先让我们试着逐行理解这些代码。由于本教程假定的编程经验很少所以我将代码行标记为【基本】或【AI】。标记为【基本】的行是经典 Python 代码标记为【AI】的行是用于开发人工智能程序的代码。通过区分基本和进阶的 Python 代码,我希望具有基本知识和中级编程技能的程序员都能够高效地使用本教程。
```
import numpy as np #【基本】
a = np.array([11, 22, 33]) #【AI】
b = np.array([44, 55, 66]) #【AI】
c = np.add(a, b) #【AI】
print(c) #【基本】
```
`import numpy as np` 导入 numpy 库并将其命名为 `np`。Python 中的 `import` 语句类似于在 C/C++ 用 `#include` 来包含头文件,或者在 Java 中用`import` 来使用包。
`a = np.array([11, 22, 33])``b = np.array([44, 55, 66]) ` 分别创建了名为 `a``b` 的一维数组(为了便于理解,目前假设向量等价于一维数组)。
`c = np.add(a, b)` 将向量 `a` 和`b` 相加,并将结果存储在名为 `c` 的向量中。当然,用 `a``b``c` 作为变量名是一种糟糕的编程实践,但数学家倾向于将向量命名为 <!-- $u$ --> <img style="transform: translateY(0.1em); background: white;" src="https://latex.codecogs.com/png.latex?u"><!-- $v$ --> <img style="transform: translateY(0.1em); background: white;" src="https://latex.codecogs.com/png.latex?v"><!-- $w$ --> <img style="transform: translateY(0.1em); background: white;" src="https://latex.codecogs.com/png.latex?w"> 等。如果你完全没有 Python 编程经验,请自行了解 Python 变量的相关知识。
`print(c)` 在终端上打印对象的值,即向量 `[55 77 99]`。你可以暂时这样理解向量相加, `c = [55=11+44 77=22+55 99=33+66]`。如果你想正式地了解向量和矩阵是如何相加的,但手头又没有相关的教材,我建议阅读维基百科上关于矩阵加法的文章。在网上搜索一下就会发现,用经典的 C/C++ 或 Java 程序来实现向量相加需要更多的代码。这说明 Python 很适合处理向量和矩阵。当我们执行越来越复杂的向量运算时Python 的强大将进一步显现。
在我们结束本文之前,我要做两个声明。第一,上面讨论的示例只处理了两个行向量(确切地说是 1 x 3 的矩阵)的相加,但真正的机器学习应用可能要处理 1000000 X 1000000 的矩阵。但不用担心,通过练习和耐心,我们将能够处理这些问题。第二,本文中给出许多定义包含了粗略的简化和不充分的描述。但如前面所说,在本系列结束之前,我将给这些模糊的术语下一个正式的定义。
现在我们该结束这篇文章了。我希望所有人都安装文中提到的必要软件,并运行本文中的代码。在下一篇文章中,我们将首先讨论人工智能的历史、范畴和未来,然后深入探讨线性代数的支柱——矩阵论。
*题图MJ/25071901-abc4-4144-bf27-4d98bb1d9301/*
--------------------------------------------------------------------------------
via: https://www.opensourceforu.com/2022/08/artificial-intelligence-explaining-the-basics/
作者:[Deepu Benson][a]
选题:[lkxed][b]
译者:[toknow-gh](https://github.com/toknow-gh)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.opensourceforu.com/author/deepu-benson/
[b]: https://github.com/lkxed
[1]: https://www.opensourceforu.com/wp-content/uploads/2022/05/Figure-1-The-AI-hierarchy-and-data-science.jpg
[2]: https://www.opensourceforu.com/wp-content/uploads/2022/05/Figure-2-A-2-X-3-matrix.jpg
[3]: https://www.opensourceforu.com/wp-content/uploads/2022/05/Figure-3-Python-code-to-find-the-sum-of-two-row-vectors.jpg
[0]: https://img.linux.net.cn/data/attachment/album/202310/29/155650e3vap2on6vwrrmpx.jpg


@ -0,0 +1,87 @@
[#]: subject: "Security buzzwords to avoid and what to say instead"
[#]: via: "https://opensource.com/article/22/9/security-buzzword-alternatives"
[#]: author: "Seth Kenlon https://opensource.com/users/seth"
[#]: collector: "lkxed"
[#]: translator: "ChatGPT"
[#]: reviewer: "wxy"
[#]: publisher: "wxy"
[#]: url: "https://linux.cn/article-16325-1.html"
“安全”这个热词:应避免使用还是该更直截了当?
======
![][0]
> 思考一下,如何在你的开源项目中真正定义安全性吧。
科技行业以创造“热词”而小有名气,当然,其他行业也是如此。譬如,“故事驱动”和“轻规则”是当下流行的桌游概念,“解构”的汉堡和墨西哥卷饼在高级餐厅颇受欢迎。然而,技术热词的问题在于,它们可能直接影响你的生活。当某人标榜应用程序“安全”,以此来吸引你使用他们的产品,产品实际上是在暗示一种承诺:“安全”的含义就是它是安全的,它值得你的使用与信任。但问题是,“安全”这个词可能指的是许多事情,技术行业常将它用作一个过于泛化的术语,以至于它逐渐失去了实际含义。
由于“安全”一词可能含义丰富,也可能一无是处,使用它就需要慎之又慎。事实上,最好是尽量避免使用这个词,取而代之的是,诉诸你真正要表达的东西。
### 当 “安全” 意味着加密
有时候,“安全” 会被作为 *加密* 的非常不明确的简短表述。在这种情况下,“安全” 指的是,对于外部观察者想要窃听你的数据,要经过一定的困难。
**避免这样表述:**“本网站稳如磐石且安全无忧。”
听起来很棒?你可能会想象一个拥有多重二次验证、零知识证明数据存储以及坚决的匿名策略的网站。
**你可以这么说:**“本网站承诺有 99% 的在线时间,并且其流量都通过 SSL 进行加密和验证。”
这样一来,承诺的含义变得清晰了,同时也明确了实现 “安全” 的方法(即使用 SSL以及 “安全“ 的作用范围是什么。
注意,这里并未对隐私或匿名做出任何明确的承诺。
### 当 “安全” 意味着访问限制
有时,“安全” 这个词是指应用程序或设备的访问权限。如果没有明确的解释,“安全” 可能涵盖从无效的“隐蔽即安全”模式,到简单的 htaccess 密码,直到生物识别扫描器等各种概念。
**避免这样表述:**“我们已经为你防护好了系统。”
**你可以这么说:**“我们的系统采用了二步验证法。”
### 当 “安全” 意味着数据存储
“安全” 这个词也可以指你的数据在服务器或设备上的储存方法。
**避免这样表述:**“这个设备在数据存储上考虑了安全因素。”
**你可以这么说:**“这个设备利用全盘加密技术来保护你的数据。”
当提到远程存储时,“安全” 可能更多指的是谁可以访问存储数据。
**避免这样表述:**“你的数据是安全的。”
**你可以这么说:**“你的数据经过 PGP 加密,仅你持有私钥。”
### 当 “安全” 意味着隐私
今天,“隐私” 一词几乎和 “安全” 一样宽泛且模糊。一方面,“安全” 似乎必然就涉及 “隐私”,然而,这仅在 '安全' 有明确定义时才成立。是因为设有密码阻止外人进入所以称之为私有吗?还是因为数据已加密且仅你拥有密钥所以归为私有?又或者,由于存储你数据的厂商除了 IP 地址外对你一无所知,这才算是私有?光是口头声明 “隐私”,就像未经说明就声明 “安全” 一样,是不够的。
**避免这样表述:**“你的数据在我们这里是安全的。”
**你可以这么说:**“你的数据经 PGP 加密,且只有你拥有私钥。我们不需要你的任何个人信息,唯一能识别你的只有你的 IP 地址。”
一些网站会声明 IP 地址在日志中保留期限,及非经法律授权绝不向执法部门交出用户数据等诸多承诺。虽然这些并不属于技术 “安全” 的范畴,但它们全都涉及的是信任度,你不能将这些看作是技术规格。
### 明确所说
科技是个复杂的话题,极易引发混淆。沟通是至关重要的,虽然有时候简拼和专有名词在某些场合可能管用,但通常来说,讲明白总是比较好的。当你对你的项目的 “安全” 感到自豪,不要只用模糊的词语进行简述。向其他人明确你具体做了什么来保护你的用户,同时也要明确你认为哪些事物已超出你的考量范围,并要经常进行这样的沟通。“安全” 是个好特点,但它的涵盖面过广,所以请勿畏于夸赞自己在某个具体方向上的特别之处。
*题图MJ/b8cc54ee-5556-4106-b9fa-b08539452aa7*
--------------------------------------------------------------------------------
via: https://opensource.com/article/22/9/security-buzzword-alternatives
作者:[Seth Kenlon][a]
选题:[lkxed][b]
译者:[ChatGPT](https://linux.cn/lctt/ChatGPT)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/seth
[b]: https://github.com/lkxed
[1]: https://opensource.com/sites/default/files/lead-images/security-lock-password.jpg
[0]: https://img.linux.net.cn/data/attachment/album/202310/28/095718zso6aaitoyyv4ain.jpg


@ -0,0 +1,517 @@
[#]: subject: "Making a DNS query in Ruby from scratch"
[#]: via: "https://jvns.ca/blog/2022/11/06/making-a-dns-query-in-ruby-from-scratch/"
[#]: author: "Julia Evans https://jvns.ca/"
[#]: collector: "lujun9972"
[#]: translator: "Drwhooooo"
[#]: reviewer: "wxy"
[#]: publisher: "wxy"
[#]: url: "https://linux.cn/article-16312-1.html"
从零开始,运用 Ruby 语言创建一个 DNS 查询
======
![][0]
大家好!前段时间我写了一篇关于“[如何用 Go 语言建立一个简易的 DNS 解析器][1]”的帖子。
那篇帖子里我没写有关“如何生成以及解析 DNS 查询请求”的内容,因为我觉得这很无聊,不过一些伙计指出他们不知道如何解析和生成 DNS 查询请求,并且对此很感兴趣。
我开始好奇了——解析 DNS _能_ 花多大功夫?事实证明,编写一段 120 行精巧的 Ruby 语言代码组成的程序就可以做到,这并不是很困难。
所以,在这里有一个如何生成 DNS 查询请求,以及如何解析 DNS 响应报文的速成教学!我们会用 Ruby 语言完成这项任务,主要是因为不久以后我将在一场 Ruby 语言大会上发表观点,而这篇博客帖的部分内容是为了那场演讲做准备的。:)
(我尽量让不懂 Ruby 的人也能读懂,我只使用了非常基础的 Ruby 语言代码。)
最后,我们就能制作一个非常简易的 Ruby 版本的 `dig` 工具,能够查找域名,就像这样:
```
$ ruby dig.rb example.com
example.com 20314 A 93.184.216.34
```
整个程序大概 120 行左右,所以 _并不_ 算多。(如果你想略过讲解,单纯想去读代码的话,最终程序在这里:[dig.rb][2]。)
我们不会去实现之前帖中所说的“一个 DNS 解析器是如何运作的?”,因为我们已经做过了。
那么我们开始吧!
如果你想从头开始弄明白 DNS 查询是如何格式化的,我将尝试解释如何自己弄明白其中的一些东西。大多数情况下的答案是“用 Wireshark 去解包”和“阅读 RFC 1035即 DNS 的规范”。
## 生成 DNS 查询请求
### 步骤一:打开一个 UDP 套接字
我们需要实际发送我们的 DNS 查询,因此我们就需要打开一个 UDP 套接字。我们会将我们的 DNS 查询发送至 `8.8.8.8`,即谷歌的服务器。
下面是用于建立与 `8.8.8.8` 的 UDP 连接,端口为 53DNS 端口)的代码。
```
require 'socket'
sock = UDPSocket.new
sock.bind('0.0.0.0', 12345)
sock.connect('8.8.8.8', 53)
```
#### 关于 UDP 的说明
关于 UDP我不想说太多但是我要说的是计算机网络的基础单位是“<ruby>数据包<rt>packet</rt></ruby>”(即一串字节),而在这个程序中,我们要做的是计算机网络中最简单的事情:发送 1 个数据包,并接收 1 个数据包作为响应。
所以 UDP 是一个传递数据包的最简单的方法。
它是发送 DNS 查询最常用的方法,不过你还可以用 TCP 或者 DNS-over-HTTPS。
### 步骤二:从 Wireshark 复制一个 DNS 查询
下一步:假设我们都不知道 DNS 是如何运作的,但我们还是想尽快发送一个能运行的 DNS 查询。获取 DNS 查询并确保 UDP 连接正常工作的最简单方法就是复制一个已经正常工作的 DNS 查询!
所以这就是我们接下来要做的,使用 Wireshark (一个绝赞的数据包分析工具)。
我的操作大致如下:
1. 打开 Wireshark点击 “<ruby>捕获<rt>capture</rt></ruby>” 按钮。
2. 在搜索栏输入 `udp.port == 53` 作为筛选条件,然后按下回车。
3. 在我的终端运行 `ping example.com`(用来生成一个 DNS 查询)。
4. 点击 DNS 查询(显示 “Standard query A example.com”,其中 “A” 是查询类型,“example.com” 是域名,“Standard query” 是查询类型描述)
5. 右键点击位于左下角面板上的 “<ruby>域名系统(查询)<rt>Domain Name System (query)</rt></ruby>”。
6. 点击 “<ruby>复制<rt>Copy</rt></ruby>” ——> “<ruby>作为十六进制流<rt>as a hex stream</rt></ruby>”。
7. 现在
`b96201000001000000000000076578616d706c6503636f6d0000010001`
就放到了我的剪贴板上,之后会用在我的 Ruby 程序里。好欸!
### 步骤三:解析 16 进制数据流并发送 DNS 查询
现在我们能够发送我们的 DNS 查询到 `8.8.8.8` 了!就像这样,我们只需要再加 5 行代码:
```
hex_string = "b96201000001000000000000076578616d706c6503636f6d0000010001"
bytes = [hex_string].pack('H*')
sock.send(bytes, 0)
# get the reply
reply, _ = sock.recvfrom(1024)
puts reply.unpack('H*')
```
`[hex_string].pack('H*')` 的意思就是将我们的 16 进制字符串转译成一个字节串。此时我们不知道这组数据到底是什么意思,但是很快我们就会知道了。
我们还可以借此机会运用 `tcpdump` ,确认程序是否正常进行以及发送有效数据。我是这么做的:
1. 在一个终端选项卡下执行 `sudo tcpdump -ni any port 53 and host 8.8.8.8` 命令
2. 在另一个终端选项卡下,运行 [这个程序][3]`ruby dns-1.rb`
以下是输出结果:
```
$ sudo tcpdump -ni any port 53 and host 8.8.8.8
08:50:28.287440 IP 192.168.1.174.12345 > 8.8.8.8.53: 47458+ A? example.com. (29)
08:50:28.312043 IP 8.8.8.8.53 > 192.168.1.174.12345: 47458 1/0/0 A 93.184.216.34 (45)
```
非常棒 —— 我们可以看到 DNS 请求(“这个 `example.com` 的 IP 地址在哪里?”)以及响应(“在 93.184.216.34”。所以一切运行正常。现在只需要你懂的—— 搞清我们是如何生成并解析这组数据的。
### 步骤四:学一点点 DNS 查询的格式
现在我们有一个关于 `example.com` 的 DNS 查询,让我们了解它的含义。
下方是我们的查询16 进制格式):
```
b96201000001000000000000076578616d706c6503636f6d0000010001
```
如果你在 Wireshark 上搜索,你就能看见这个查询,它由两部分组成:
* **请求头**`b96201000001000000000000`
* **语句本身**`076578616d706c6503636f6d0000010001`
### 步骤五:制作请求头
我们这一步的目标就是制作字节串 `b96201000001000000000000`(借助一个 Ruby 函数,而不是把它硬编码出来)。
LCTT 译注:<ruby>硬编码<rt>hardcode</rt></ruby> 指在软件实现上,将输出或输入的相关参数(例如:路径、输出的形式或格式)直接以**常量**的方式撰写在源代码中,而非在运行期间由外界指定的设置、资源、数据或格式做出适当回应。)
那么:请求头是 12 个字节。那些个 12 字节到底意味着什么呢?如果你在 Wireshark 里看看(亦或者阅读 [RFC-1035][4]),你就能理解:它是由 6 个 2 字节大小的数字串联在一起组成的。
这六个数字分别对应查询 ID、标志以及数据包内的问题计数、回答资源记录数、权威名称服务器记录数、附加资源记录数。
我们还不需要在意这些都是些什么东西 —— 我们只需要把这六个数字输进去就行。
但所幸我们知道该输哪六位数,因为我们就是为了直观地生成字符串 `b96201000001000000000000`
所以这里有一个制作请求头的函数(注意:这里没有 `return`,因为在 Ruby 语言里,如果处在函数最后一行是不需要写 `return` 语句的):
```
def make_question_header(query_id)
# id, flags, num questions, num answers, num auth, num additional
[query_id, 0x0100, 0x0001, 0x0000, 0x0000, 0x0000].pack('nnnnnn')
end
```
上面内容非常的短,主要因为除了查询 ID ,其余所有内容都由我们硬编码写了出来。
#### 什么是 `nnnnnn`?
你可能想知道 `.pack('nnnnnn')` 中的 `nnnnnn` 是什么意思。那是一个格式字符串,它告诉 `.pack()` 函数如何把这 6 个数字转换成一个字节串。
`.pack` 的文档在 [这里][5],其中描述了 `n` 的含义其实是“将其表示为‘16 位无符号、网络(大端序)字节序’”。
LCTT 译注:<ruby>大端序<rt>Big-endian</rt></ruby>:指将高位字节存储在低地址,低位字节存储在高地址的方式。)
16 个位等同于 2 字节,同时我们需要用网络字节序,因为这属于计算机网络范畴。我不会再去解释什么是字节序了(尽管我确实有 [一幅自制漫画尝试去描述它][6])。
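如果你对 Ruby 的 `pack` 不太熟悉,可以对照下面这个用 Python 的 `struct` 模块写的小例子(仅作示意):同样是把 6 个 16 位无符号、大端字节序的整数打包成字节串,结果和上面的请求头完全一致。
```
import struct

# "!" 表示网络字节序(大端),"H" 表示 16 位无符号整数,对应 Ruby 的 'n'
query_id = 0xb962
header = struct.pack("!HHHHHH", query_id, 0x0100, 0x0001, 0x0000, 0x0000, 0x0000)
print(header.hex())  # b96201000001000000000000
```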
## 测试请求头代码
让我们快速检测一下我们的 `make_question_header` 函数运行情况。
```
puts make_question_header(0xb962) == ["b96201000001000000000000"].pack("H*")
```
这里运行后输出 `true` 的话,我们就成功了。
好了我们接着继续。
### 步骤六:为域名进行编码
下一步我们需要生成 **问题本身**(“`example.com` 的 IP 是什么?”)。这里有三个部分:
* **域名**(比如说 `example.com`
* **查询类型**(比如说 `A` 代表 “IPv4 **A**ddress”
* **查询类**(总是一样的,`1` 代表 **IN**ternet
最麻烦的就是域名,让我们写个函数对付这个。
`example.com` 以 16 进制被编码进一个 DNS 查询中,如 `076578616d706c6503636f6d00`。这有什么含义吗?
如果我们把这些字节以 ASCII 值翻译出来,结果会是这样:
```
076578616d706c6503636f6d00
7 e x a m p l e 3 c o m 0
```
因此,每个段(如 `example`)的前面都会显示它的长度(`7`)。
下面是有关将 `example.com` 翻译成 `7 e x a m p l e 3 c o m 0` 的 Ruby 代码:
```
def encode_domain_name(domain)
domain
.split(".")
.map { |x| x.length.chr + x }
.join + "\0"
end
```
除此之外,要完成问题部分的生成,我们只需要在域名结尾追加上(查询)的类型和类。
### 步骤七:编写 make_dns_query
下面是制作一个 DNS 查询的最终函数:
```
def make_dns_query(domain, type)
query_id = rand(65535)
header = make_question_header(query_id)
question = encode_domain_name(domain) + [type, 1].pack('nn')
header + question
end
```
这是目前我们写的所有代码 [dns-2.rb][7] —— 目前仅 29 行。
## 接下来是解析的阶段
现在我尝试去解析一个 DNS 查询,我们到了硬核的部分:解析。同样的,我们会将其分成不同部分:
* 解析一个 DNS 的请求头
* 解析一个 DNS 的名称
* 解析一个 DNS 的记录
这几个部分中最难的(可能跟你想的不一样)就是:“解析一个 DNS 的名称”。
### 步骤八:解析 DNS 的请求头
让我们先从最简单的部分开始DNS 的请求头。我们之前已经讲过关于它那六个数字是如何串联在一起的了。
那么我们现在要做的就是:
* 读其首部 12 个字节
* 将其转换成一个由 6 个数字组成的数组
* 为方便起见,将这些数字放入一个类中
以下是具体进行工作的 Ruby 代码:
```
class DNSHeader
attr_reader :id, :flags, :num_questions, :num_answers, :num_auth, :num_additional
def initialize(buf)
hdr = buf.read(12)
@id, @flags, @num_questions, @num_answers, @num_auth, @num_additional = hdr.unpack('nnnnnn')
end
end
```
注: `attr_reader` 是 Ruby 的一种说法,意思是“使这些实例变量可以作为方法使用”。所以我们可以调用 `header.flags` 来查看`@flags`变量。
我们也可以借助 `DNSheader(buf)` 调用这个,也不差。
让我们往最难的那一步挪挪:解析一个域名。
### 步骤九:解析一个域名
首先,让我们写其中的一部分:
```
def read_domain_name_wrong(buf)
domain = []
loop do
len = buf.read(1).unpack('C')[0]
break if len == 0
domain << buf.read(len)
end
domain.join('.')
end
```
这里会反复读取 1 个字节的长度值,再按这个长度读取相应字节数的内容追加到字符串中,直到读到的长度为 0。
如果这段代码运行正常,我们就能看到 DNS 响应中第一次出现的域名(`example.com`)。
## 关于域名方面的麻烦:压缩!
但当 `example.com` 第二次出现的时候,我们遇到了麻烦 —— Wireshark 报告说,那里的域名只是令人费解的 2 个字节:`c00c`。
这种情况就是所谓的 **DNS 域名压缩**,如果我们想解析任何 DNS 响应我们就要先把这个实现完。
幸运的是,这没**那么**难。这里 `c00c` 的含义就是:
* 前两个比特(`0b11.....`)意思是“前面有 DNS 域名压缩!”
* 而余下的 14 比特是一个整数。这种情况下,这个整数是 `12``0x0c`),意思是“返回至数据包中的第 12 个字节处,使用在那里找到的域名”
如果你想阅读更多有关 DNS 域名压缩的内容,我找到了一篇把这部分解释得比较容易理解的文章:[关于 DNS RFC 的释义][8]。
### 步骤十:实现 DNS 域名压缩
因此,我们需要一个更复杂的 `read_domain_name` 函数。
如下所示:
```
domain = []
loop do
len = buf.read(1).unpack('C')[0]
break if len == 0
if len & 0b11000000 == 0b11000000
# weird case: DNS compression!
second_byte = buf.read(1).unpack('C')[0]
offset = ((len & 0x3f) << 8) + second_byte
old_pos = buf.pos
buf.pos = offset
domain << read_domain_name(buf)
buf.pos = old_pos
break
else
# normal case
domain << buf.read(len)
end
end
domain.join('.')
```
这里具体是:
* 如果前两个位为 `0b11`,那么我们就需要做 DNS 域名压缩。那么:
* 读取第二个字节并用一点儿运算将其转化为偏移量。
* 在缓冲区保存当前位置。
* 在我们计算偏移量的位置上读取域名
* 恢复我们在缓冲区中原来的位置。
可能看起来很乱,但是这是解析 DNS 响应的部分中最难的一处了,我们快搞定了!
#### 一个关于 DNS 压缩的漏洞
有些人可能会说,有恶意行为者可以借助这个代码,通过一个带 DNS 压缩条目的 DNS 响应指向这个响应本身,这样 `read_domain_name` 就会陷入无限循环。我才不会改进它(这个代码已经够复杂了好吗!)但一个真正的 DNS 解析器确实会更巧妙地处理它。比如,这里有个 [能够避免在 miekg/dns 中陷入无限循环的代码][9]。
如果这是一个真正的 DNS 解析器,可能还有其他一些边缘情况会造成问题。
### 步骤十一:解析一个 DNS 查询
你可能在想:“为什么我们需要解析一个 DNS 查询?这是一个响应啊!”
但每一个 DNS 响应包含它自己的原始查询,所以我们有必要去解析它。
这是解析 DNS 查询的代码:
```
class DNSQuery
attr_reader :domain, :type, :cls
def initialize(buf)
@domain = read_domain_name(buf)
@type, @cls = buf.read(4).unpack('nn')
end
end
```
内容不是太多:类型和类各占 2 个字节。
### 步骤十二:解析一个 DNS 记录
最让人兴奋的部分 —— DNS 记录是我们的查询数据存放的地方!即这个 “rdata 区域”(“记录数据字段”)就是我们会在 DNS 查询对应的响应中获得的 IP 地址所驻留的地方。
代码如下:
```
class DNSRecord
attr_reader :name, :type, :class, :ttl, :rdlength, :rdata
def initialize(buf)
@name = read_domain_name(buf)
@type, @class, @ttl, @rdlength = buf.read(10).unpack('nnNn')
@rdata = buf.read(@rdlength)
end
```
我们还需要让这个 `rdata` 区域更加可读。记录数据字段的实际用途取决于记录类型 —— 比如一个“A” 记录就是一个四个字节的 IP 地址,而一个 “CNAME” 记录则是一个域名。
所以下面的代码可以让请求数据更可读:
```
def read_rdata(buf, length)
@type_name = TYPES[@type] || @type
if @type_name == "CNAME" or @type_name == "NS"
read_domain_name(buf)
elsif @type_name == "A"
buf.read(length).unpack('C*').join('.')
else
buf.read(length)
end
end
```
这个函数使用了 `TYPES` 这个哈希表将一个记录类型映射为一个更可读的名称:
```
TYPES = {
1 => "A",
2 => "NS",
5 => "CNAME",
# there are a lot more but we don't need them for this example
}
```
`read.rdata` 中最有趣的一部分可能就是这一行 `buf.read(length).unpack('C*').join('.')` —— 像是在说:“嘿!一个 IP 地址有 4 个字节,就将它转换成一组四个数字组成的数组,然后数字互相之间用 . 联个谊吧。”
### 步骤十三:解析 DNS 响应的收尾工作
现在我们正式准备好解析 DNS 响应了!
工作代码如下所示:
```
class DNSResponse
attr_reader :header, :queries, :answers, :authorities, :additionals
def initialize(bytes)
buf = StringIO.new(bytes)
@header = DNSHeader.new(buf)
@queries = (1..@header.num_questions).map { DNSQuery.new(buf) }
@answers = (1..@header.num_answers).map { DNSRecord.new(buf) }
@authorities = (1..@header.num_auth).map { DNSRecord.new(buf) }
@additionals = (1..@header.num_additional).map { DNSRecord.new(buf) }
end
end
```
这里大部分内容就是在调用之前我们写过的其他函数来协助解析 DNS 响应。
如果 `@header.num_answers` 的值为 2代码会使用了 `(1..@header.num_answers).map` 这个巧妙的结构创建一个包含两个 DNS 记录的数组。(这可能有点像 Ruby 魔法,但我就是觉得有趣,但愿不会影响可读性。)
我们可以把这段代码整合进我们的主函数中,就像这样:
```
sock.send(make_dns_query("example.com", 1), 0) # 1 is "A", for IP address
reply, _ = sock.recvfrom(1024)
response = DNSResponse.new(reply) # parse the response!!!
puts response.answers[0]
```
不过输出结果看起来有点辣眼睛(类似于 `#<DNSRecord:0x00000001368e3118>`),所以我们需要编写一些好看的输出代码,提升它的可读性。
### 步骤十四:对于我们输出的 DNS 记录进行美化
我们需要向 DNS 记录增加一个 `.to_s` 方法,让它有一个更好的字符串展示形式。它只是 `DNSRecord` 中一个一行的方法。
```
def to_s
"#{@name}\t\t#{@ttl}\t#{@type_name}\t#{@parsed_rdata}"
end
```
你可能也注意到了,我忽略了 DNS 记录中的 `class` 区域。那是因为它总是相同的IN 表示 “internet”所以我觉得它是多余的。虽然很多 DNS 工具(像真正的 `dig`)会输出 `class`。
## 大功告成!
这是我们最终的主函数:
```
def main
# connect to google dns
sock = UDPSocket.new
sock.bind('0.0.0.0', 12345)
sock.connect('8.8.8.8', 53)
# send query
domain = ARGV[0]
sock.send(make_dns_query(domain, 1), 0)
# receive & parse response
reply, _ = sock.recvfrom(1024)
response = DNSResponse.new(reply)
response.answers.each do |record|
puts record
end
```
我不觉得我们还能再补充什么 —— 我们建立连接、发送一个查询、输出每一个回答,然后退出。完事儿!
```
$ ruby dig.rb example.com
example.com 18608 A 93.184.216.34
```
你可以在这里查看最终程序:[dig.rb][2]。可以根据你的喜好给它增加更多特性,就比如说:
* 为其他查询类型添加美化输出。
* 输出 DNS 响应时增加“授权”和“可追加”的选项
* 重试查询
* 确保我们看到的 DNS 响应匹配我们的查询ID 必须对得上!)
另外如果我在这篇文章中出现了什么错误,就 [在推特和我聊聊吧][10]。(我写的比较赶所以可能还是会有些错误)
*题图MJ/449d049d-6bdd-448b-a61d-17138f8551bc*
--------------------------------------------------------------------------------
via: https://jvns.ca/blog/2022/11/06/making-a-dns-query-in-ruby-from-scratch/
作者:[Julia Evans][a]
选题:[lujun9972][b]
译者:[Drwhooooo](https://github.com/Drwhooooo)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://jvns.ca/
[b]: https://github.com/lujun9972
[1]: https://jvns.ca/blog/2022/02/01/a-dns-resolver-in-80-lines-of-go/
[2]: https://gist.github.com/jvns/1e5838a53520e45969687e2f90199770
[3]: https://gist.github.com/jvns/aa202b1edd97ae261715c806b2ba7d39
[4]: https://datatracker.ietf.org/doc/html/rfc1035#section-4.1.1
[5]: https://ruby-doc.org/core-3.0.0/Array.html#method-i-pack
[6]: https://wizardzines.com/comics/little-endian/
[7]: https://gist.github.com/jvns/3587ea0b4a2a6c20dcfd8bf653fc11d9
[8]: https://datatracker.ietf.org/doc/html/rfc1035#section-4.1.4
[9]: https://link.zhihu.com/?target=https%3A//github.com/miekg/dns/blob/b3dfea07155dbe4baafd90792c67b85a3bf5be23/msg.go%23L430-L435
[10]: https://twitter.com/b0rk
[0]: https://img.linux.net.cn/data/attachment/album/202310/24/155014kli69j43i021iwwl.jpg


@ -0,0 +1,218 @@
[#]: subject: "Terminal Basics Series #1: Changing Directories in Linux Terminal"
[#]: via: "https://itsfoss.com/change-directories/"
[#]: author: "Abhishek Prakash https://itsfoss.com/author/abhishek/"
[#]: collector: "lkxed"
[#]: translator: "ChatGPT"
[#]: reviewer: "wxy"
[#]: publisher: "wxy"
[#]: url: "https://linux.cn/article-16304-1.html"
终端基础Linux 终端中的目录切换
======
![][0]
> 本篇文章作为终端基础教程系列的一部分,介绍如何在 Linux 命令行中,利用绝对路径和相对路径实现目录切换。
Linux 的 `cd` 命令让你可以轻松切换文件夹(即目录)。只需提供你要切换到的文件夹路径即可。
```
cd path_to_directory
```
然而对于 Linux 新人来说,可能会在路径的指定上有所困扰。
首先,让我们解决这个问题。
### 理解 Linux 中的路径
在 Linux 文件系统中,路径是用来追踪文件位置的信息。所有的路径都从根目录开始,然后向下延伸。
你可以通过下面的方式查看当前所在的位置:
```
pwd
```
结果可能是类似于 `/home/username` 的输出。注意,这里的 `username` 将会是你自己的用户名。
你可以注意到,路径是由 `/` 符号和目录名组成的。比如路径 `/home/abhishek/scripts` 表示 `scripts` 是在文件夹 `abhishek` 之内,而文件夹 `abhishek``home` 文件夹之内。要注意,第一个 '/' 是指根目录(即文件系统的开始处),后面的 '/' 则作为目录的分隔符。
![Path in Linux][1]
> 🖥️ 在终端中键入 `ls /`,然后按回车。你将会看到根目录下的所有内容,试试看!
接下来,让我们学习两种常见的路径指定方式:绝对路径和相对路径。
**绝对路径**:这种路径从根开始,然后一直扩展到你需要的位置。如果一个路径是以 `/` 开头,那就说明它是一个绝对路径。
**相对路径**:这是相对于你文件系统中当前位置的路径。如果我当前位置在 `/home/abhishek`,并且我需要去 `/home/abhishek/Documents` 我只需要简单地切换到 `Documents`,而不需要指定整个绝对路径 `/home/abhishek/Documents`
在我演示这两种路径的区别之前,有必要先熟悉两个特殊的目录标识:
- `.` (单点)表示当前目录。
- `..` (双点)表示上一级目录,也就是当前目录的母目录。
这里有一张图形化的表示。
![Absolute path vs relative path][2]
### 利用 cd 命令变更目录
在你已对路径概念有所了解之后,我们来了解如何切换目录。
> 🖥️ 如果你**仅键入 `cd` 并按回车键**,无论当前位置在哪,系统都会将你带回主目录。试一试吧。
敲入以下命令,你就能看到主目录里的所有文件夹:
```
ls
```
这是我看到的情况:
```
abhishek@ituxedo:~$ ls
Desktop Downloads Pictures Templates VirtualBoxVMs
Documents Music Public Videos
```
你的情况可能与此类似,但未必完全一样。
假如你希望跳转到 `Documents` 文件夹。由于它就在当前目录下,这里使用相对路径会比较方便:
```
cd Documents
```
> 💡 注意,大部分 Linux 发行版预设的终端模拟器会在提示符本身显示出当前所在的位置。因此你不必频繁使用 `pwd` 指令来确认自己的位置。
![Most Linux terminal prompts show the current location][3]
假如你希望切换到位于主目录里的 `Templates` 文件夹。
你可以使用相对路径 `../Templates``..` 会让你返回到上层目录,即 `/home/username`,然后你就可以进入 `Templates` 文件夹了)。
但这次我们尝试使用绝对路径。请把下面的 `abhishek` 替换成你的用户名。
```
cd /home/abhishek/Templates
```
此刻你已经在 `Templates` 文件夹里了。如何前往 `Downloads` 文件夹呢?这次我们再使用相对路径:
```
cd ../Downloads
```
下面的图片会回顾一下你刚才学到的所有或有关目录切换的范例。
![cd command example][4]
> 💡 别忘了你还可以使用终端的 `tab` 键自动补全功能。只需要键入命令或者文件夹名称的前几个字母,然后敲击 `tab` 键,系统就会尝试自动地补全命令或文件夹名称,或者给你显示出所有可能的选项。
### 故障解决
在 Linux 终端操作切换目录的过程中,你可能会遇到一些常见的错误。
#### 文件或目录不存在
如果在你尝试切换目录时,出现类似下面的错误信息:
> bash: cd: directory_name: No such file or directory
那么你可能把路径或目录名称弄错了。这里有几点你需要注意的:
- 请确定你输入的目录名中没有拼写错误。
- Linux 系统对大小写敏感,因此,`Downloads` 和 `downloads` 会被识别为不同的目录。
- 你可能未正确指定路径。可能你所在的位置与你预期的不同?或者你遗漏了绝对路径中的开头的 `/` 字符?
![Common examples of "no such file or directory" error][5]
#### 非目录错误
如果你看到像下面这样的错误提示:
> bash: cd: filename: Not a directory
这表示你尝试使用 `cd` 命令对一个文件进行操作,而不是一个目录(文件夹)。很明显,你不能像进入文件夹那样“进入”一个文件,因此会出现这样的错误。
![Not a directory error with the cd command][6]
#### 参数过多
这是 Linux 新手常犯的另一个错误:
> bash: cd: too many arguments
`cd` 命令只接受一个参数。也就是说,你只能对命令指定一个目录。
如果你指定了超过一个的参数,或者在路径中误加了空格,你就会看到这个错误。
![Too many arguments error in Linux terminal][7]
> 🏋🏻 如果你输入 `cd -`,它将会把你带到前一个目录。当你在两个相隔较远的地方切换时非常方便,可以避免再次输入长路径。
### 特殊目录符号
在结束这个教程之前,我想快速告诉你关于特殊符号 `~`。在 Linux 中,`~` 是用户主目录的捷径。
如果用户 `abhi` 运行它,`~` 就会代表 `/home/abhi`,如果用户 `prakash` 运行,`~` 就意味着 `/home/prakash`
总结一下你在这个基础教程系列中学到的所有特殊目录标识:
| 符号 | 描述 |
| :- | :- |
| `.` | 当前目录 |
| `..` | 上级目录 |
| `~` | 主目录 |
| `-` | 前一个目录 |
### 测试你的知识
下面是一些简单的练习,用来测试你刚刚学到的关于路径和 `cd` 命令的知识。
移动到你的主目录,并使用这个命令创建一个嵌套的目录结构:
```
mkdir -p sample/dir1/dir2/dir3
```
然后,一步步来试试这个:
- 使用绝对路径或相对路径进入 `dir3`
- 使用相对路径移动到 `dir1`
- 使用你能想象到的最短路径进入 `dir2`
- 使用绝对路径切换到 `sample` 目录
- 返回你的主目录
> 🔑 想知道你是否全都做对了吗?欢迎分享你的答案。
现在你知道如何切换目录,是不是应该学习一下如何创建它们呢?
我强烈推荐你阅读这篇文章,了解一些关于终端和命令的小技巧。
如果你想了解 Linux 命令行的基础知识,记得关注我们的 Linux 终端基础系列教程的更多章节。
--------------------------------------------------------------------------------
via: https://itsfoss.com/change-directories/
作者:[Abhishek Prakash][a]
选题:[lkxed][b]
译者:[ChatGPT](https://linux.cn/lctt/ChatGPT)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://itsfoss.com/author/abhishek/
[b]: https://github.com/lkxed/
[1]: https://itsfoss.com/content/images/2023/02/path-linux.webp
[2]: https://itsfoss.com/content/images/2023/02/absolute-and-relative-path.png
[3]: https://itsfoss.com/content/images/2023/02/linux-terminal-prompt.png
[4]: https://itsfoss.com/content/images/2023/02/cd-command-example.svg
[5]: https://itsfoss.com/content/images/2023/02/common-errors-with-cd.png
[6]: https://itsfoss.com/content/images/2023/02/not-a-directory-error-linux.png
[7]: https://itsfoss.com/content/images/2023/02/too-many-arguments.png
[8]: https://itsfoss.community/t/exercise-in-changing-directories-in-linux-terminal/10177?ref=its-foss
[0]: https://img.linux.net.cn/data/attachment/album/202310/21/062234mz9zymqc6om5924m.jpg


@ -0,0 +1,319 @@
[#]: subject: "Some notes on using nix"
[#]: via: "https://jvns.ca/blog/2023/02/28/some-notes-on-using-nix/"
[#]: author: "Julia Evans https://jvns.ca/"
[#]: collector: "lkxed"
[#]: translator: "ChatGPT"
[#]: reviewer: "wxy"
[#]: publisher: "wxy"
[#]: url: "https://linux.cn/article-16332-1.html"
我的一些 nix 学习经验:安装和打包
======
![][0]
最近,我首次尝试了 Mac。直至现在我注意到的最大缺点是其软件包管理比 Linux 差很多。一段时间以来,我对于 homebrew 感到相当不满,因为每次我安装新的软件包时,它大部分时间都花在了升级上。于是,我萌生了试试 [nix][1] 包管理器的想法!
公认的nix 用起来是有点让人困惑的(它甚至有自己单独的编程语言!),所以我一直在努力以最简洁的方式使用 nix避开复杂的配置文件管理和新编程语言的学习。以下是我到目前为止学到的内容主要包括如何
- 使用 nix 安装软件包
- 为一个名为 [paperjam][2] 的 C++ 程序构建一个自定义的 nix 包
- 用 nix 安装五年前的 [hugo][3] 版本
如同以往,由于我对 nix 的了解还停留在入门阶段,本篇文章可能存在一些表述不准确的地方。甚至我自己也对于我是否真的喜欢上 nix 感到模棱两可 —— 它的使用真的让人相当困惑!但是,它帮我成功编译了一些以前总是难以编译的软件,并且通常来说,它比 homebrew 的安装速度要快。
### nix 为何引人关注?
通常,人们把 nix 定义为一种“声明式的包管理”。尽管我对此并不太感兴趣,但以下是我对 nix 的两个主要欣赏之处:
- 它提供了二进制包(托管在 [https://cache.nixos.org/][4] 上),你可以迅速下载并安装
- 对于那些没有二进制包的软件nix 使编译它们变得更容易
我认为 nix 之所以擅长于编译软件,主要有以下两个原因:
- 在你的系统中,可以安装同一库或程序的多个版本(例如,你可能有两个不同版本的 libc。举个例子我当前的计算机上就存在两个版本的 node一个位于 `/nix/store/4ykq0lpvmskdlhrvz1j3kwslgc6c7pnv-nodejs-16.17.1`,另一个位于 `/nix/store/5y4bd2r99zhdbir95w5pf51bwfg37bwa-nodejs-18.9.1`
- 除此之外nix 在构建包时是在隔离的环境下进行的,只使用你明确声明的依赖项的特定版本。因此,你无需担心这个包可能依赖于你的系统里的其它你并不了解的包,再也不用与 `LD_LIBRARY_PATH` 战斗了!许多人投入了大量工作,来列出所有包的依赖项。
在本文后面,我将给出两个例子,展示 nix 如何使我在编译软件时遇到了更小的困难。
#### 我是如何开始使用 nix 的
下面是我开始使用 nix 的步骤:
- 安装 nix。我忘记了我当时是如何做到这一点但看起来有一个[官方安装程序][5] 和一个来自 zero-to-nix.com 的 [非官方安装程序][6]。在 MacOS 上使用标准的多用户安装卸载 nix 的 [教程][7] 有点复杂,所以选择一个卸载教程更为简单的安装方法可能值得。
- 把 `~/.nix-profile/bin` 添加到我的 `PATH`
- 用 `nix-env -iA nixpkgs.NAME` 命令安装包
- 就是这样。
基本上,是把 `nix-env -iA` 当作 `brew install` 或者 `apt-get install`
例如,如果我想安装 `fish`,我可以这样做:
```
nix-env -iA nixpkgs.fish
```
这看起来就像是从 [https://cache.nixos.org][8] 下载一些二进制文件 - 非常简单。
有些人使用 nix 来安装他们的 Node 和 Python 和 Ruby 包,但我并没有那样做 —— 我仍然像我以前一样使用 `npm install``pip install`
#### 一些我没有使用的 nix 功能
有一些 nix 功能/工具我并没有使用,但我要提及一下。我最初认为你必须使用这些功能才能使用 nix因为我读过的大部分 nix 教程都讨论了它们。但事实证明,你并不一定要使用它们。
- NixOS一个 Linux 发行版)
- [nix-shell][9]
- [nix flakes][10]
- [home-manager][11]
- [devenv.sh][12]
我不去深入讨论它们,因为我并没真正使用过它们,而且网上已经有很多详解。
### 安装软件包
#### nix 包在哪里定义的?
我认为 nix 包主仓库中的包是定义在 [https://github.com/NixOS/nixpkgs/][13]。
你可以在 [https://search.nixos.org/packages][14] 查找包。似乎有两种官方推荐的查找包的方式:
- `nix-env -qaP NAME`,但这非常缓慢,并且我并没有得到期望的结果
- `nix --extra-experimental-features 'nix-command flakes' search nixpkgs NAME`,这倒是管用,但显得有点儿冗长。并且,无论何种原因,它输出的所有包都以 `legacyPackages` 开头
我找到了一种我更喜欢的从命令行搜索 nix 包的方式:
- 运行 `nix-env -qa '*' > nix-packages.txt` 获取 Nix 仓库中所有包的列表
- 编写一个简洁的 `nix-search` 脚本,仅在 `nix-packages.txt` 中进行 grep 操作(`cat ~/bin/nix-packages.txt | awk '{print $1}' | rg "$1"`),完整脚本见下面的示例
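按上面那行命令补全成脚本,大致是这样(文件路径沿用原文中的示例,`rg` 即 ripgrep仅作示意

```
#!/bin/bash
# nix-search在预先生成的包列表里搜索 nix 包
# 需要先运行过nix-env -qa '*' > ~/bin/nix-packages.txt
cat ~/bin/nix-packages.txt | awk '{print $1}' | rg "$1"
```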
#### 所有的东西都是通过符号链接来安装的
nix 的一个主要设计是,没有一个单一的 `bin` 文件夹来存放所有的包,而是使用了符号链接。有许多层的符号链接。比如,以下就是一些符号链接的例子:
- 我机器上的 `~/.nix-profile` 最终是一个到 `/nix/var/nix/profiles/per-user/bork/profile-111-link/` 的链接
- `~/.nix-profile/bin/fish` 是到 `/nix/store/afkwn6k8p8g97jiqgx9nd26503s35mgi-fish-3.5.1/bin/fish` 的链接
当我安装某样东西的时候,它会创建一个新的 `profile-112-link` 目录并建立新的链接,并且更新我的 `~/.nix-profile` 使其指向那个目录。
我认为,这意味着如果我安装了新版本的 `fish` 但我并不满意,我可以很容易地退回先前的版本,只需运行 `nix-env --rollback`,这样就可以让我回到之前的配置文件目录了。
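想自己查看这些链接和配置文件代数的话,可以用下面几条命令(输出路径因机器而异,仅作示意):

```
ls -l ~/.nix-profile                  # 看看第一层链接指向哪里
readlink -f ~/.nix-profile/bin/fish   # 一路解析到 /nix/store/... 下的真实文件
nix-env --list-generations            # 列出各代配置文件,回滚时就是在它们之间切换
```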
#### 卸载包并不意味着删除它们
如果我像这样卸载 nix 包,实际上并不会释放任何硬盘空间,而仅仅是移除了符号链接:
```
$ nix-env --uninstall oil
```
我尚不清楚如何彻底删除包 - 我试着运行了如下的垃圾收集命令,这似乎删除了一些项目:
```
$ nix-collect-garbage
...
85 store paths deleted, 74.90 MiB freed
```
然而,我系统上仍然存在 `oil` 包,在 `/nix/store/8pjnk6jr54z77jiq5g2dbx8887dnxbda-oil-0.14.0`
`nix-collect-garbage` 有一个更激进的版本,它还会删除你配置文件的旧版本(这样你就不能回滚了)。
```
$ nix-collect-garbage -d --delete-old
```
尽管如此,上述命令仍无法删除 `/nix/store/8pjnk6jr54z77jiq5g2dbx8887dnxbda-oil-0.14.0`,我不明白原因。
#### 升级过程
你可以通过以下的方式升级 nix 包:
```
nix-channel --update
nix-env --upgrade
```
(这与 `apt-get update && apt-get upgrade` 类似。)
我还没真正尝试升级任何东西。我推测,如果升级过程中出现任何问题,我可以通过以下方式轻松地回滚(因为在 nix 中,所有事物都是不可变的!):
```
nix-env --rollback
```
有人向我推荐了 Ian Henry 的 [这篇文章][15],该文章讨论了 `nix-env --upgrade` 的一些令人困惑的问题 - 也许它并不总是如我们所料?因此,我会对升级保持警惕。
### 下一个目标:创建名为 paperjam 的自定义包
经过几个月使用现有的 nix 包后,我开始考虑制作自定义包,对象是一个名为 [paperjam][2] 的程序,它还没有被打包封装。
实际上,因为我系统上的 `libiconv` 版本不正确,我甚至在没有 nix 的情况下也遇到了编译 `paperjam` 的困难。我认为,尽管我还不懂如何制作 nix 包,但使用 nix 来编译它可能会更为简单。结果证明我的想法是对的!
然而,理清如何实现这个目标的过程相当复杂,因此我在这里写下了一些我实现它的方式和步骤。
#### 构建示例包的步骤
在我着手制作 `paperjam` 自定义包之前,我想先试手构建一个已存在的示例包,以便确保我已经理解了构建包的整个流程。这个任务曾令我头痛不已,但在我在 Discord 提问之后,有人向我阐述了如何从 [https://github.com/NixOS/nixpkgs/][13] 获取一个可执行的包并进行构建。以下是操作步骤:
**步骤 1** 从 GitHub 的 [nixpkgs][13] 下载任意一个包,以 `dash` 包为例:
```
wget https://raw.githubusercontent.com/NixOS/nixpkgs/47993510dcb7713a29591517cb6ce682cc40f0ca/pkgs/shells/dash/default.nix -O dash.nix
```
**步骤 2** 用 `with import <nixpkgs> {};` 替换开头的声明(`{ lib , stdenv , buildPackages , autoreconfHook , pkg-config , fetchurl , fetchpatch , libedit , runCommand , dash }:`)。我不清楚为何需要这样做,但事实证明这么做是有效的。
**步骤 3** 运行 `nix-build dash.nix`
这将开始编译该包。
**步骤 4** 运行 `nix-env -i -f dash.nix`
这会将该包安装到我的 `~/.nix-profile` 目录下。
就这么简单!一旦我完成了这些步骤,我便感觉自己能够逐步修改 `dash` 包,进一步创建属于我自己的包了。
#### 制作自定义包的过程
因为 `paperjam` 依赖于 `libpaper`,而 `libpaper` 还没有打包,所以我首先需要构建 `libpaper` 包。
以下是 `libpaper.nix`,我基本上是从 [nixpkgs][13] 仓库中其他包的源码中复制粘贴得到的。我猜测这里的原理是nix 对如何编译 C 包有一些默认规则,例如 “运行 `make install`”,所以 `make install` 实际上是默认执行的,并且我并不需要明确地去配置它。
```
with import <nixpkgs> {};
stdenv.mkDerivation rec {
pname = "libpaper";
version = "0.1";
src = fetchFromGitHub {
owner = "naota";
repo = "libpaper";
rev = "51ca11ec543f2828672d15e4e77b92619b497ccd";
hash = "sha256-S1pzVQ/ceNsx0vGmzdDWw2TjPVLiRgzR4edFblWsekY=";
};
buildInputs = [ ];
meta = with lib; {
homepage = "https://github.com/naota/libpaper";
description = "libpaper";
platforms = platforms.unix;
license = with licenses; [ bsd3 gpl2 ];
};
}
```
这个脚本基本上告诉 nix 如何从 GitHub 下载源代码。
我通过运行 `nix-build libpaper.nix` 来构建它。
接下来,我需要编译 `paperjam`。我制作的 [nix 包][16] 的链接在这里。除了告诉它从哪里下载源码外,我需要做的主要事情有:
- 添加一些额外的构建依赖项(像 `asciidoc`
- 在安装过程中设置一些环境变量(`installFlags = [ "PREFIX=$(out)" ];`),这样它就会被安装在正确的目录,而不是 `/usr/local/bin`
我首先从散列值为空开始,然后运行 `nix-build` 以获取一个关于散列值不匹配的错误信息。然后我从错误信息中复制出正确的散列值。
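这个“先留空、再从报错里抄散列值”的流程大致如下(报错的具体措辞因 nix 版本而异,这里只是示意):

```
# 先在 .nix 文件里写 hash = "";(或一个全零占位值)
nix-build paperjam.nix
# 构建会失败,并给出类似 “hash mismatch ... got: sha256-XXXX” 的提示
# 把 got: 后面的散列值复制回 .nix 文件,再重新构建即可
```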
我只是在 nixpkgs 仓库中运行 `rg PREFIX` 来找出如何设置 `installFlags` 的 —— 我认为设置 `PREFIX` 应该是很常见的操作,可能之前已经有人做过了,事实证明我的想法是对的。所以我只是从其他包中复制粘贴了那部分代码。
然后我执行了:
```
nix-build paperjam.nix
nix-env -i -f paperjam.nix
```
然后所有的东西都开始工作了,我成功地安装了 `paperjam`!耶!
### 下一个目标:安装一个五年前的 Hugo 版本
当前,我使用的是 2018 年的 Hugo 0.40 版本来构建我的博客。由于我并不需要任何新功能,因此也没有升级的必要。在 Linux 上这件事非常简单Hugo 发布的是静态二进制文件,这意味着我可以直接从 [发布页面][17] 下载五年前的二进制文件并运行。真的很方便!
但在我的 Mac 电脑上我遇到了一些复杂的情况。过去五年中Mac 的硬件已经发生了一些变化,因此我下载的 Mac 版 Hugo 二进制文件并不能运行。同时,我尝试使用 `go build` 从源代码编译,但由于在过去的五年内 Go 的构建规则也有所改变,因此没有成功。
我曾试图通过在 Linux docker 容器中运行 Hugo 来解决这个问题,但我并不太喜欢这个方法:尽管可以工作,但它运行得有些慢,而且我个人感觉这样做有些多余。毕竟,编译一个 Go 程序不应该那么麻烦!
幸好Nix 来救援!接下来,我将介绍我是如何使用 nix 来安装旧版本的 Hugo。
#### 使用 nix 安装 Hugo 0.40 版本
我的目标是安装 Hugo 0.40,并将其添加到我的 PATH 中,以 `hugo-0.40` 作为命名。以下是我实现此目标的步骤。尽管我采取了一种相对特殊的方式进行操作,但是效果不错(可以参考 [搜索和安装旧版本的 Nix 包][18] 来找到可能更常规的方法)。
**步骤 1** 在 nixpkgs 仓库中搜索找到 Hugo 0.40。
我在此链接中找到了相应的 `.nix` 文件 [https://github.com/NixOS/nixpkgs/blob/17b2ef2/pkgs/applications/misc/hugo/default.nix][19]。
**步骤 2** 下载该文件并进行构建。
我下载了带有 `.nix` 扩展名的文件(以及同一目录下的另一个名为 `deps.nix` 的文件),将文件的首行替换为 `with import <nixpkgs> {};`,然后使用 `nix-build hugo.nix` 进行构建。
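前两步用命令行大致是这样raw 链接由上面的 GitHub 地址推得,若短提交号在 raw 链接中不可用,换成完整提交哈希即可,仅作示意):

```
wget https://raw.githubusercontent.com/NixOS/nixpkgs/17b2ef2/pkgs/applications/misc/hugo/default.nix -O hugo.nix
wget https://raw.githubusercontent.com/NixOS/nixpkgs/17b2ef2/pkgs/applications/misc/hugo/deps.nix -O deps.nix
# 把 hugo.nix 的第一行替换为with import <nixpkgs> {};
nix-build hugo.nix
```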
虽然这个过程几乎无需进行修改就能成功运行,但我仍然做了两处小调整:
- 把 `with stdenv.lib` 替换为 `with lib`
- 为避免与我已安装的其他版本的 `hugo` 冲突,我把包名改为了 `hugo040`
**步骤 3** 将 `hugo` 重命名为 `hugo-0.40`
我编写了一个简短的后安装脚本,用以重命名 Hugo 二进制文件。
```
postInstall = ''
mv $out/bin/hugo $out/bin/hugo-0.40
'';
```
我是通过在 nixpkgs 仓库中运行 `rg 'mv '` 命令,然后复制和修改一条看似相关的代码片段来找到如何实施此步骤。
**步骤 4** 安装。
我通过运行 `nix-env -i -f hugo.nix` 命令,将 Hugo 安装到了 `~/.nix-profile/bin` 目录中。
所有的步骤都顺利运行了!我把最终的 `.nix` 文件存放到了我自己的 [nixpkgs 仓库][20] 中,这样我以后如果需要,就能再次使用它了。
### 可重复的构建过程并非神秘,其实它们极其复杂
我觉得值得一提的是,这个 `hugo.nix` 文件并不是什么魔法——我之所以能在今天轻易地编译 Hugo 0.40,完全归功于许多人长期以来的付出,他们让 Hugo 的这个版本得以以可重复的方式打包。
### 总结
安装 `paperjam` 和这个五年前的 Hugo 版本过程惊人地顺利,实际上比没有 nix 来编译它们更简单。这是因为 nix 极大地方便了我使用正确的 `libiconv` 版本来编译 `paperjam` 包,而且五年前就已经有人辛苦地列出了 Hugo 的确切依赖关系。
我并无计划详细深入地使用 nix真的我很可能对它感到困扰然后最后选择回归使用 homebrew但我们将拭目以待我发现简单入手然后按需逐步掌握更多功能远比一开始就全面接触一堆复杂功能更容易掌握。
我可能不会在 Linux 上使用 nix —— 我一直都对 Debian 基础发行版的 `apt` 和 Arch 基础发行版的 `pacman` 感到满意,它们策略明晰且少有混淆。而在 Mac 上,使用 nix 似乎会有所得。不过,谁知道呢!也许三个月后,我可能会对 nix 感到不满然后再次选择回归使用 homebrew。
*题图MJ/f68aaf37-4a34-4643-b3a1-8728d49cf887*
--------------------------------------------------------------------------------
via: https://jvns.ca/blog/2023/02/28/some-notes-on-using-nix/
作者:[Julia Evans][a]
选题:[lkxed][b]
译者:[ChatGPT](https://linux.cn/lctt/ChatGPT)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://jvns.ca/
[b]: https://github.com/lkxed/
[1]: https://nixos.org/
[2]: https://mj.ucw.cz/sw/paperjam/
[3]: https://github.com/gohugoio/hugo/
[4]: https://cache.nixos.org/
[5]: https://nixos.org/download
[6]: https://zero-to-nix.com/concepts/nix-installer
[7]: https://nixos.org/manual/nix/stable/installation/installing-binary.html#macos
[8]: https://cache.nixos.org
[9]: https://nixos.org/guides/nix-pills/developing-with-nix-shell.html
[10]: https://nixos.wiki/wiki/Flakes
[11]: https://github.com/nix-community/home-manager
[12]: https://devenv.sh/
[13]: https://github.com/NixOS/nixpkgs/
[14]: https://search.nixos.org/packages
[15]: https://ianthehenry.com/posts/how-to-learn-nix/my-first-package-upgrade/
[16]: https://github.com/jvns/nixpkgs/blob/22b70a48a797538c76b04261b3043165896d8f69/paperjam.nix
[17]: https://github.com/gohugoio/hugo/releases/tag/v0.40
[18]: https://lazamar.github.io/download-specific-package-version-with-nix/
[19]: https://github.com/NixOS/nixpkgs/blob/17b2ef2/pkgs/applications/misc/hugo/default.nix
[20]: https://github.com/jvns/nixpkgs/
[0]: https://img.linux.net.cn/data/attachment/album/202310/30/221702nk4a42dglmcgc7lh.jpg
@ -3,16 +3,20 @@
[#]: author: "Sreenath https://itsfoss.com/author/sreenath/"
[#]: collector: "lujun9972/lctt-scripts-1693450080"
[#]: translator: "geekpi"
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "
[#]: reviewer: "wxy"
[#]: publisher: "wxy"
[#]: url: "https://linux.cn/article-16310-1.html"
如何使用 VLC 录制屏幕(娱乐)
如何使用 VLC 录制屏幕
======
![][0]
> 用途广泛的 VLC 可以做很多事情。屏幕录制就是其中之一。
VLC 不仅仅是一个视频播放器。它还是一款多功能视频工具,具有普通用户永远无法了解的众多功能。
你可以[使用 VLC 下载 YouTube 视频][1],甚至可以用它修剪视频。
你可以 [使用 VLC 下载 YouTube 视频][1],甚至可以用它修剪视频。
VLC 的另一个不寻常用途是屏幕录制。
@ -20,19 +24,17 @@ VLC 的另一个不寻常用途是屏幕录制。
### 使用 VLC 进行屏幕录制
🚧
> 🚧 虽然我可以使用 VLC 录制桌面屏幕,但无法录制任何声音和鼠标光标。在我看来,它并不能替代合适的屏幕录制工具。
虽然我可以使用 VLC 录制桌面屏幕,但无法录制任何声音和鼠标光标。在我看来,它并不能替代合适的屏幕录制工具。
要使用 [VLC][2] 录制屏幕,请打开它并单击“媒体”,然后选择“转换/保存”。(或者直接点击媒体→打开采集设备)
要使用 [VLC][2] 录制屏幕,请打开它并单击 “<ruby>媒体<rt>Media</rt></ruby>”,然后选择 “<ruby>转换/保存…<rt>Convert/Save...</rt></ruby>”。(或者直接点击 “<ruby>媒体<rt>Media</rt></ruby>”→“<ruby>打开采集设备…<rt>Open Capture Device...</rt></ruby>”)
![Select Convert/ Save option][3]
转到“捕获设备”选项卡,然后从捕获模式下拉列表中选择桌面。
转到 <ruby>捕获设备<rt>Capture Device</rt></ruby> 选项卡,然后从<ruby>捕获模式<rt>Capture Mode</rt></ruby>下拉列表中选择桌面。
![Capture Mode: Desktop][4]
现在这里为你的录制提供了一些帧率。10、24 fps 等都不错,如果你需要更高的质量,请选择更高的。请注意,这会增加文件大小和系统要求。然后,按转换/保存按钮。
现在这里为你的录制提供了一些帧率。10、24 fps 等都不错,如果你需要更高的质量,请选择更高的。请注意,这会增加文件大小和系统要求。然后,按<ruby>转换/保存<rt>Convert/Save</rt></ruby>按钮。
![Set Frame Rate][5]
@ -40,15 +42,15 @@ VLC 的另一个不寻常用途是屏幕录制。
![Set Output Profile][6]
设置你需要的视频格式,然后按保存。
设置你需要的视频格式,然后按<ruby>保存<rt>Save</rt></ruby>
![Edit the Output Profile][7]
现在,你需要给出目标文件名。单击“浏览”按钮,选择位置,然后输入输出文件的名称。单击“保存”。
现在,你需要给出目标文件名。单击 <ruby>浏览<rt>Browse</rt></ruby>”按钮,选择位置,然后输入输出文件的名称。单击 <ruby>保存<rt>Save</rt></ruby>”。
![Output file location and Name][8]
按开始按钮,开始录制屏幕。
<ruby>开始<rt>Start</rt></ruby>按钮,开始录制屏幕。
![Start Recording][9]
@ -66,12 +68,12 @@ VLC 的另一个不寻常用途是屏幕录制。
### 总结
如你所见,虽然可以使用 VLC 录制桌面屏幕,但它并不能替代[专用屏幕录制工具][13]。缺乏录音是一个重大的遗憾。
![][14]
如你所见,虽然可以使用 VLC 录制桌面屏幕,但它并不能替代 [专用屏幕录制工具][13]。缺乏录音是一个重大的遗憾。
仅当你没有任何其他选项时才使用 VLC 进行屏幕录制。你怎么认为?
*题图MJ/f48c22e9-a2d1-4567-a265-6c3aaf147aff*
--------------------------------------------------------------------------------
via: https://itsfoss.com/vlc-record-screen/
@ -79,7 +81,7 @@ via: https://itsfoss.com/vlc-record-screen/
作者:[Sreenath][a]
选题:[lujun9972][b]
译者:[geekpi](https://github.com/geekpi)
校对:[校对者ID](https://github.com/校对者ID)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
@ -99,3 +101,4 @@ via: https://itsfoss.com/vlc-record-screen/
[12]: https://itsfoss.com/content/images/2023/09/recorded-output.png
[13]: https://itsfoss.com/best-linux-screen-recorders/
[14]: https://itsfoss.com/content/images/size/w256h256/2022/12/android-chrome-192x192.png
[0]: https://img.linux.net.cn/data/attachment/album/202310/23/153033pej4f9egegjbtbbs.jpg
@ -0,0 +1,85 @@
[#]: subject: "Focusrite Extends Help to Linux Developer to Enable Driver Support"
[#]: via: "https://news.itsfoss.com/focusrite-linux/"
[#]: author: "Ankush Das https://news.itsfoss.com/author/ankush/"
[#]: collector: "lujun9972/lctt-scripts-1693450080"
[#]: translator: "geekpi"
[#]: reviewer: "wxy"
[#]: publisher: "wxy"
[#]: url: "https://linux.cn/article-16323-1.html"
Focusrite 帮助 Linux 开发人员提供驱动程序支持
======
![][0]
> Focusrite 正在大力支持 Linux 。
尽管 Linux 作为桌面平台正在快速发展,但与 Windows 和 macOS 相比,它仍然是少数音乐制作人的选择。一个主要原因是缺乏一些 DAW 和硬件制造商的官方支持。
幸运的是,随着大牌企业涉足 Linux这一切都在改变。好奇吗嗯最近[PreSonus 宣布其 Studio One DAW 支持 Linux][1]。
现在,[PreSonus][3] 的直接竞争对手之一 [Focusrite][2] 也加入了进来🤩,他们也生产音频接口和其他专业音频设备。
如果你是音乐制作人或音乐爱好者,你应该知道这些名字往往是彼此的代名词。虽然我目前使用 PreSonus 而不是 Focusrite但这两家公司及其产品对于大众市场来说都很重要。
### Focusrite 对 Linux 支持的承诺
**Geoffrey Bennett** 是 [LinuxMusicians][5] 论坛的用户,他因致力于为流行的 Focusrite 音频接口Scarlett 第二代和第三代)提供 Linux 支持而闻名。
当然,这是一项艰巨的非官方任务。
为了更进一步,他最近发起了一项筹款活动,以获得 Focusrite 的最新音频接口,即 [Scarlett 第四代][6]。
除此之外,他的目标是购买 Focusrite 的播客设备。
![][7]
筹款活动取得成功的同时,**也得到了 Focusrite 的关注****他们提供了大量资金**来帮助他购买最新的音频接口。
**Focusrite 还表示愿意在产品发布前,向他寄送他没有的设备以及未来的产品。**
最重要的是,他们提到了他们的**工程团队如何帮助他改进开发工作**。
> 虽然我之前一直很难与 Focusrite 的工程师或管理层取得联系,但这次筹款活动获得热烈反响的消息传到了 Focusrite 的高层。考虑到 Linux 音频的小众性质,我一直控制着自己的期望值,但这超出了我的想象。
>
> 我刚刚与他们通了电话,除了提供硬件之外,他们还询问支持开发的其他方式。
>
> Focusrite 不仅主动提出给我寄来我收藏中没有的设备,而且还提议,对于未来的产品发布,他们将尽最大努力提前向我发送设备。这意味着 Linux 支持可以从产品发布之日起就做好准备!
>
> 此外,他们正在讨论他们的工程团队如何更好地帮助我简化开发流程,并消除大部分猜测。
>
> —— Geoffrey 来自 [LinuxMusicians][8]
因此Focusrite 希望确保 Linux 社区能够以更好的体验使用他们的产品。也就是说,间接为 Linux 平台提供官方支持。
这是个大新闻!
这两家大型音频制造商**也应该会说服其他软件和硬件公司**。无论是对驱动程序的官方支持,还是对客户的故障排除支持,任何方式都是可行的。
对 Linux 平台的任何关注都将有助于改善 Linux 音频生态系统。
如果更多的音频公司添加了 Linux 支持,音频专业人士就不必被锁定在专有平台上,也不必支付许可证费用来获得操作系统。
💬我对未来充满希望!你怎么看呢?
--------------------------------------------------------------------------------
via: https://news.itsfoss.com/focusrite-linux/
作者:[Ankush Das][a]
选题:[lujun9972][b]
译者:[geekpi](https://github.com/geekpi)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://news.itsfoss.com/author/ankush/
[b]: https://github.com/lujun9972
[1]: https://linux.cn/article-16254-1.html
[2]: https://focusrite.com/
[3]: https://www.presonus.com/
[5]: https://linuxmusicians.com/
[6]: https://focusrite.com/scarlett/4th-generation
[7]: https://news.itsfoss.com/content/images/2023/10/go-fund-me-focusrite.jpg
[8]: https://linuxmusicians.com/viewtopic.php?t=26173&start=15
[0]: https://img.linux.net.cn/data/attachment/album/202310/27/231035gjnehjvraytiwtbw.jpg
@ -3,20 +3,24 @@
[#]: author: "Sagar Sharma https://itsfoss.com/author/sagar/"
[#]: collector: "lujun9972/lctt-scripts-1693450080"
[#]: translator: "geekpi"
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "
[#]: reviewer: "wxy"
[#]: publisher: "wxy"
[#]: url: "https://linux.cn/article-16307-1.html"
在 Ubuntu 等非 Nix 操作系统上安装和使用 Nix 包管理器
======
![][0]
> Nix 软件包管理器可以安装在任何 Linux 发行版上。具体方法如下。
[人们喜欢使用不可变的 NixOS][1] 的原因之一是它的 Nix 包管理器。
它有超过 80,000 个软件包,这可能与 Debian 软件包的数量不太接近,但仍然令人印象深刻。
它有超过 80,000 个软件包,这可能与 Debian 软件包的数量相差比较大,但仍然令人印象深刻。
好处是你不必仅仅为了包管理器而[安装 NixOS][2]。与 [Homebrew][3] 和 Rust 的 [Cargo 包管理器][4] 一样,你可以在当前发行版中使用 Nix 包管理器。
好处是你不必仅仅为了包管理器而 [安装 NixOS][2]。与 [Homebrew][3] 和 Rust 的 [Cargo 包管理器][4] 一样,你可以在当前发行版中使用 Nix 包管理器。
为什么要这么做? 因为有时,你可能会发现仅以 Nix 打包格式提供的新应用。这种情况很少见,但有可能。
为什么要这么做?因为有时,你可能会发现仅以 Nix 打包格式提供的新应用。这种情况很少见,但有可能。
在本教程中,我将引导你完成以下内容:
@ -26,43 +30,35 @@
* 更新包
* 删除包
### 在其他 Linux 发行版上安装 Nix 包管理器
Nix 包管理器有两种安装方式:全局安装和本地安装。
📋
> 📋 全局安装意味着系统上的每个可用用户都可以访问 nix 包管理器,而本地安装仅适用于当前用户。[Nix 官方文档][5] 建议你使用全局安装。
全局安装意味着系统上的每个可用用户都可以访问 nix 包管理器,而本地安装仅适用于当前用户。[Nix 官方文档][5] 建议你使用全局安装。
#### 全局安装
##### 对于全局安装:
如果你想全局安装Nix包管理器那么你需要执行以下命令
如果你想全局安装 Nix 包管理器,那么,你需要执行以下命令:
````
sh <(curl -L https://nixos.org/nix/install) --daemon
sh <(curl -L https://nixos.org/nix/install) --daemon
````
执行上述命令后,需要输入 `y` 键并按`回车`键:
执行上述命令后,需要输入 `y` 键并按回车键:
![][6]
完成后,关闭当前终端,因为它不会在当前终端会话上运行。
##### 对于本地安装
#### 本地安装
如果你更喜欢本地安装并且不想每次都使用 sudo则执行以下命令
如果你更喜欢本地安装并且不想每次都使用 `sudo`,则执行以下命令:
````
sh <(curl -L https://nixos.org/nix/install) --no-daemon
sh <(curl -L https://nixos.org/nix/install) --no-daemon
````
输入 `y` 并在要求确认时按`回车`键。
输入 `y` 并在要求确认时按回车键。
完成后,关闭当前终端会话并启动一个新终端会话以使用 Nix 包管理器。
@ -70,7 +66,7 @@ Nix 包管理器有两种安装方式:全局安装和本地安装。
安装 Nix 包管理器后,下一步是搜索包。
首先,[访问 Nix 搜索的官方页面][7]并输入你要安装的软件包的名称。
首先,[访问 Nix 搜索的官方页面][7] 并输入你要安装的软件包的名称。
从给定的描述中,你可以找到所需的软件包,然后选择 `nix-env` 进行永久安装。
@ -78,24 +74,20 @@ Nix 包管理器有两种安装方式:全局安装和本地安装。
![][8]
我上面提到的最后一步(复制命令)什么也不做,只是为你提供了一个用于安装的命令。
我上面提到的最后一步(复制命令)什么也不做,只是为你提供了一个用于安装的命令。
现在,你所要做的就是在终端中执行该命令。
就我而言,它给了我以下命令来安装 Firefox
````
nix-env -iA nixpkgs.firefox
nix-env -iA nixpkgs.firefox
````
完成后,你可以使用以下命令列出已安装的软件包:
````
nix-env -q
nix-env -q
````
![][9]
@ -104,34 +96,28 @@ Nix 包管理器有两种安装方式:全局安装和本地安装。
到目前为止,这是 Nix 包管理器的最佳功能,因为你可以使用/测试包甚至不用安装它!
为此,你可以使用 nix shell它允许你将交互式 shell 与指定的包一起使用,关闭后,你将无法再访问该包。
为此,你可以使用 Nix Shell它允许你将交互式 Shell 与指定的包一起使用,关闭后,你将无法再访问该包。
很酷,对吧?
要使用 nix-shell 访问你喜欢的软件包,请使用以下命令语法:
````
nix-shell -p <package_name>
nix-shell -p <package_name>
````
例如,我想使用一次 neofetch所以我使用了以下命令
例如,我想使用一次 `neofetch`,所以我使用了以下命令:
````
nix-shell -p neofetch
nix-shell -p neofetch
````
![][10]
要退出 shell你所要做的就是执行 `exit` 命令:
要退出 Shell你所要做的就是执行 `exit` 命令:
````
exit
exit
````
### 使用 Nix 包管理器更新包
@ -141,17 +127,13 @@ Nix 包管理器有两种安装方式:全局安装和本地安装。
要更新软件包,首先,你需要使用以下命令更新频道:
````
nix-channel --update
nix-channel --update
````
接下来,你可以通过运行更新命令来列出过时的软件包:
接下来,你可以通过运行更新命令来列出过时的软件包:
````
nix-env --upgrade --dry-run
nix-env --upgrade --dry-run
````
![][11]
@ -161,17 +143,13 @@ Nix 包管理器有两种安装方式:全局安装和本地安装。
要更新单个包,请使用以下命令:
````
nix-env -u <Package_name>
nix-env -u <Package_name>
````
如果你想一次更新所有软件包,请使用以下命令:
````
nix-env -u
nix-env -u
````
### 使用 Nix 包管理器删除包
@ -179,17 +157,13 @@ Nix 包管理器有两种安装方式:全局安装和本地安装。
要删除软件包,你只需按以下方式执行 `nix-env` 命令即可:
````
nix-env --uninstall [package_name]
nix-env --uninstall [package_name]
````
例如,如果我想删除 Firefox 浏览器,那么,我将使用以下命令:
````
nix-env --uninstall firefox
nix-env --uninstall firefox
````
![][12]
@ -202,8 +176,12 @@ Nix 包管理器有两种安装方式:全局安装和本地安装。
我喜欢 NixOS。以至于我写了整个系列这样你就不必阅读文档基础知识
> **[NixOS 系列][1]**
我希望你能像我一样喜欢使用它。
*题图MJ/da586165-eadb-4ed7-9a0b-9c92903344d5*
--------------------------------------------------------------------------------
via: https://itsfoss.com/ubuntu-install-nix-package-manager/
@ -211,16 +189,16 @@ via: https://itsfoss.com/ubuntu-install-nix-package-manager/
作者:[Sagar Sharma][a]
选题:[lujun9972][b]
译者:[geekpi](https://github.com/geekpi)
校对:[校对者ID](https://github.com/校对者ID)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://itsfoss.com/author/sagar/
[b]: https://github.com/lujun9972
[1]: https://itsfoss.com/why-use-nixos/
[2]: https://itsfoss.com/install-nixos-vm/
[3]: https://itsfoss.com/homebrew-linux/
[4]: https://itsfoss.com/install-rust-cargo-ubuntu-linux/
[1]: https://linux.cn/article-15606-1.html
[2]: https://linux.cn/article-15624-1.html
[3]: https://linux.cn/article-14065-1.html
[4]: https://linux.cn/article-13938-1.html
[5]: https://nixos.org/learn
[6]: https://itsfoss.com/content/images/2023/10/Install-nix-package-manager-globally.png
[7]: https://search.nixos.org/packages
@ -229,4 +207,5 @@ via: https://itsfoss.com/ubuntu-install-nix-package-manager/
[10]: https://itsfoss.com/content/images/2023/10/Use-packages-wihout-installing-them-using-the-nix-package-manager.png
[11]: https://itsfoss.com/content/images/2023/10/List-outdated-packages-using-the-nix-package-manager.png
[12]: https://itsfoss.com/content/images/2023/10/Remove-packages-using-the-nix-package-manager-1.png
[13]: https://nixos.org/
[13]: https://nixos.org/
[0]: https://img.linux.net.cn/data/attachment/album/202310/22/082116ket5ed87padptmbw.jpg
@ -0,0 +1,113 @@
[#]: subject: "Tor Browser 13.0 Release is an Exciting Upgrade for Privacy-Focused Users"
[#]: via: "https://news.itsfoss.com/tor-browser-13-0-release/"
[#]: author: "Sourav Rudra https://news.itsfoss.com/author/sourav/"
[#]: collector: "lujun9972/lctt-scripts-1693450080"
[#]: translator: "geekpi"
[#]: reviewer: "wxy"
[#]: publisher: "wxy"
[#]: url: "https://linux.cn/article-16303-1.html"
Tor 浏览器 13.0 发布
======
![][0]
> 通过新的 Tor 浏览器更新提升你的私人网络体验!
Tor 浏览器是访问 [Tor 网络][1] 的流行方式之一,它一直是许多想要规避对其施加的限制的人的首选,甚至是那些注重隐私的人的首选。
当然,使用 Tor 浏览器是最简单的 [提高隐私的方法][2]。
虽然 Tor 浏览器基于 Mozilla Firefox但它也进行了一些调整。
在最新的主要更新中,让我们看看发生了什么变化。
### 🆕 Tor 浏览器 13.0:有什么新变化?
![][3]
据开发人员介绍Tor 浏览器 13.0 基于 [Firefox ESR 115][4],并带来了一年的推送到上游的修改。
过渡到较新的 Firefox 版本还**允许他们利用 [Firefox 113][5] 中引入的改进的辅助功能引擎**。
因此,使用屏幕阅读器等**辅助技术**的用户现在可以在使用 Tor 浏览器时获得**比以往更好的性能**。
此版本的亮点包括:
* 改进的 “<ruby>信封打印<rt>Letterboxing</rt></ruby>” 功能
* 更新主页
* 更新徽标
#### 改进的 “<ruby>信封打印<rt>Letterboxing</rt></ruby>” 功能
![][7]
Tor 浏览器 13.0 版本中的 “<ruby>信封打印<rt>Letterboxing</rt></ruby>” 功能得到了重要更新。
LCTT 译注:“<ruby>信封打印<rt>Letterboxing</rt></ruby>” 是一种网络浏览隐私保护技术,它通过为浏览器窗口添加白色填充(看起来像一个信封的周围),来防止网站跟踪你的浏览行为。当你改变浏览器窗口的大小时,“信封打印” 功能会在周边提供白色填充,确保窗口大小始终为特定的大众标准尺寸。这意味着即使你改变窗口大小,那些尝试通过窗口尺寸来跟踪你的网站也无法获取独特的信息。)
开发人员发现 **1000×1000 像素的默认 “信封打印” 尺寸**存在问题,许多现代网站无法正常运行,导致这些网站切换到适用于平板电脑和智能手机的布局。
有些网站甚至会显示桌面网站,但只有横向滚动条。为了解决这个问题,他们调整了**窗口的大小**,最大为 1400×900 像素。
对于最终用户来说,这意味着你无需手动调整窗口大小即可获得合适的尺寸。
他们还补充说:
> 桌面版 Tor 浏览器不应再在大屏幕上触发响应断点,并且我们的绝大多数桌面用户将看到熟悉的横向纵横比,更符合现代浏览器的要求。
>
> 通过计算,我们选择了这一特定尺寸,以便为新窗口提供更大的空间,同时又不会增加过多的桶数量。
#### 更新了主页
![][9]
Tor 浏览器主页的更新已经等了很长时间了。在此版本中,它展示了更新的徽标(更多内容见下文)以及一个新功能 —— **“洋葱化的” DuckDuckGo**,用于访问其 “.onion 站点”。
这也**与我之前提到的改进的辅助功能引擎**齐头并进,从而为屏幕阅读器和其他辅助技术的用户提供更好的支持。
他们还修复了可怕的“**红屏死亡**”错误,该错误在打开新的主页选项卡时偶尔会弹出。
#### 更新徽标
![][10]
从文章的开头你就可能已经注意到 Tor 浏览器的徽标有些不同。
实际上,**Tor 浏览器所有版本的徽标都已更新**,外观更加干净和现代。
这个熟悉的“洋葱标志”已经存在了一段时间,它是由当时的社区民意调查选出的。很高兴看到他们仍在努力改进它。
这些只是该版本的主要亮点,你可以通过 [官方发行说明][11] 了解所有技术修复和其他改进。
### 📥 下载 Tor 浏览器 13.0
此版本的 Tor 浏览器适用于 **Linux**、**Windows**、**Android** 和 **macOS**。你可以前往 [官方网站][12] 获取你选择的安装包。
> **[Tor 浏览器 13.0][12]**
--------------------------------------------------------------------------------
via: https://news.itsfoss.com/tor-browser-13-0-release/
作者:[Sourav Rudra][a]
选题:[lujun9972][b]
译者:[geekpi](https://github.com/geekpi)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://news.itsfoss.com/author/sourav/
[b]: https://github.com/lujun9972
[1]: https://en.wikipedia.org/wiki/Tor_(network)
[2]: https://itsfoss.com/improve-privacy/
[3]: https://news.itsfoss.com/content/images/2023/10/Tor_Browser_13.0_1.png
[4]: https://www.mozilla.org/en-US/firefox/115.0esr/releasenotes/
[5]: https://www.mozilla.org/en-US/firefox/113.0/releasenotes/
[7]: https://news.itsfoss.com/content/images/2023/10/Tor_Browser_13.0_2.png
[8]: https://news.itsfoss.com/content/images/2023/04/Follow-us-on-Google-News.png
[9]: https://news.itsfoss.com/content/images/2023/10/Tor_Browser_13.0_3.png
[10]: https://news.itsfoss.com/content/images/2023/10/Tor_Browser_13.0_4.png
[11]: https://blog.torproject.org/new-release-tor-browser-130/
[12]: https://www.torproject.org/download/
[0]: https://img.linux.net.cn/data/attachment/album/202310/21/055646urkd0upcknv0znv5.jpg
@ -3,14 +3,17 @@
[#]: author: "Sourav Rudra https://news.itsfoss.com/author/sourav/"
[#]: collector: "lujun9972/lctt-scripts-1693450080"
[#]: translator: "geekpi"
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "
[#]: reviewer: "wxy"
[#]: publisher: "wxy"
[#]: url: "https://linux.cn/article-16321-1.html"
Warehouse管理 Flatpak 应用的强大工具
======
你是否依赖于 Flatpak 应用Warehouse 应该帮助你简化事情。
![][0]
> 你是否使用 Flatpak 应用Warehouse 可以让你更轻松。
有一个应用,可以为你提供有关 Flatpak 应用的所有重要信息,以及管理它的工具,这不是很好吗?
这一次你很幸运!
@ -25,31 +28,25 @@ Warehouse管理 Flatpak 应用的强大工具
![][2]
开发人员将其称为“**多功能工具箱**”Warehouse 可用于**管理 Flatpak 用户数据****查看 Flatpak 的信息**,甚至**批量管理系统上安装的 Flatpaks**。
开发人员将其称为“**多功能工具箱**”Warehouse 可用于**管理 Flatpak 用户数据****查看 Flatpak 的信息**,甚至**批量管理系统上安装的 Flatpaks**。
它主要使用 **Python 语言**编写,具有以下主要特点:
* **轻松管理用户数据**
* **批量操作功能**
* **清除剩余数据**
![][3]
* 轻松管理用户数据
* 批量操作功能
* 清除残余数据
### 初步印象 👨‍💻
我开始在我的 Ubuntu 系统上测试 Warehouse。从 **Flathub** 上安装它很简单。
我一开始是在我的 Ubuntu 系统上测试 Warehouse。从 **Flathub** 上安装它很简单。
打开后,显示了所有**已安装的 Flatpak 应用**的列表。它们**都以有序的方式排列**。
在我看来,如果们添加了切换到网格布局的选项,会看起来更好。
在我看来,如果们添加了切换到网格布局的选项,会看起来更好。
![][4]
📋
你是否对 Firefox 上面的应用感到好奇?我们最近介绍过它,它是一个名为 “[Mission Center][5]” 的系统监控应用。
> 📋 你是否对列在 Firefox 之上的应用感到好奇?我们最近介绍过它,它是一个名为 “[Mission Center][5]” 的系统监控应用。
接下来,我**查看了 Flatpak 应用的属性**,我点击应用旁边的“信息”标志,打开了应用属性窗口。
@ -67,7 +64,7 @@ Warehouse 还具有**搜索功能**,允许你搜索特定的应用。当你安
![][8]
你还可以**设置过滤器**,以**排序系统完成各种任务所需的应用甚至运行时**。
你还可以**设置过滤器**,以**应用甚至运行时(系统完成各种任务所需的)进行排序**。
点击应用程序左上角的“漏斗”图标以开始筛选。
@ -85,7 +82,7 @@ Warehouse 还具有**搜索功能**,允许你搜索特定的应用。当你安
![][11]
第一个选项“**从文件安装**”允许我使用 “.flatpakref” 文件安装 Flatpak 应用。
第一个选项“<ruby>从文件安装<rt>Install from file</rt></ruby>”允许我使用 `.flatpakref` 文件安装 Flatpak 应用。
![][12]
@ -93,15 +90,15 @@ Warehouse 还具有**搜索功能**,允许你搜索特定的应用。当你安
![][13]
随后,我查看了“**管理剩余数据**”选项。它向我显示了一个旧的 Flatpak 应用留下了多少数据。
随后,我查看了“<ruby>管理残余数据<rt>Manage Leftover Data</rt></ruby>”选项。它向我显示了一个旧的 Flatpak 应用留下了多少数据。
我可以选择通过“安装”选项重新安装应用并恢复数据,也可以使用“垃圾桶”选项彻底清除系统中的任何痕迹。有时,残留数据会占用大量空间。
我可以选择通过“<ruby>安装<rt>Install</rt></ruby>”选项重新安装应用并恢复数据,也可以使用“<ruby>垃圾桶<rt>Trash</rt></ruby>”选项彻底清除系统中的任何痕迹。有时,残留数据会占用大量空间。
![][14]
如果你经常安装/删除应用,那么你的系统中可能会有很多剩余数据。
如果你想管理 Flatpak 仓库,可以前往“**管理远程仓库**”选项,该选项允许你添加或删除它们。
如果你想管理 Flatpak 仓库,可以前往“<ruby>管理远程仓库<rt>Manage Remotes</rt></ruby>”选项,该选项允许你添加或删除它们。
![][15]
@ -115,13 +112,11 @@ Warehouse 还具有**搜索功能**,允许你搜索特定的应用。当你安
你可以前往 [Flathub 商店][17]下载最新版本。
[WarehouseFlathub][17
> **[WarehouseFlathub][17]**
你还可以访问其 [GitHub 仓库][18]查看源代码。
你还可以访问其 [GitHub 仓库][18] 查看源代码。
_你对 Warehouse 应用有何看法请在下面的评论中分享你的想法。_
* * *
你对 Warehouse 应用有何看法?请在下面的评论中分享你的想法。
--------------------------------------------------------------------------------
@ -130,7 +125,7 @@ via: https://news.itsfoss.com/warehouse/
作者:[Sourav Rudra][a]
选题:[lujun9972][b]
译者:[geekpi](https://github.com/geekpi)
校对:[校对者ID](https://github.com/校对者ID)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
@ -154,3 +149,4 @@ via: https://news.itsfoss.com/warehouse/
[16]: https://news.itsfoss.com/content/images/2023/10/Warehouse_10.png
[17]: https://flathub.org/apps/io.github.flattool.Warehouse
[18]: https://github.com/flattool/warehouse
[0]: https://img.linux.net.cn/data/attachment/album/202310/26/224157oonzwjd1vp0d2p85.jpg
@ -0,0 +1,95 @@
[#]: subject: "An Interesting Bluetooth App for Linux Just Appeared!"
[#]: via: "https://news.itsfoss.com/bluetooth-app-linux-overskride/"
[#]: author: "Sourav Rudra https://news.itsfoss.com/author/sourav/"
[#]: collector: "lujun9972/lctt-scripts-1693450080"
[#]: translator: "geekpi"
[#]: reviewer: "wxy"
[#]: publisher: "wxy"
[#]: url: "https://linux.cn/article-16315-1.html"
Overskride刚刚出现的一款有趣的 Linux 蓝牙应用!
======
![][0]
> 通过这个全新的应用可以实现所有蓝牙功能!
一款适用于 Linux 的新应用已经出现,它可能是满足你所有蓝牙需求的一站式应用。
这款名为 “**Overskride**” 的开源应用首次发布。尽管它还处于开发阶段,但已经提供了一些不错的功能。
请允许我带你看一下。
### Overskride可以期待什么
![][1]
Overskride 将会吸引 Rust 爱好者,因为它**主要是用 Rust 语言**编写的,带有 **GTK4/libadwaita 风格**
根据开发人员的说法,它是一个**简单的蓝牙和 Obex 客户端** _未来计划_,无论使用什么桌面环境或窗口管理器都可以工作。
一些主要功能包括:
* 信任/阻止设备。
* 能够发送/接收文件。
* 设置连接超时时间。
* 支持配置适配器。
查看上面的截图,你可以看到自定义蓝牙设备和连接的所有基本选项,包括适配器名称。
当然,考虑到这是该应用的第一次发布,人们不应该抱有太高的期望。因此,还有改进的空间。
以下是 **Overskride 的一些预览**,以查看它提供的功能。
我在 Ubuntu 22.04 LTS 和 GNOME 42.9 上使用提供的 Flatpak 包进行安装。在此安装上运行似乎没有任何问题。
Overskride 能够检测到我的智能手机,并提供多种配置选项。
![][3]
你可以将设备添加到受信任列表或阻止列表、重命名并发送文件。
我尝试了**文件传输功能**,但在此之前,我必须使用 [Flatseal][4] 允许访问用户文件,以便它可以读取我系统上的文件。
![][5]
我在手机上接受文件传输后,传输开始。速度还可以,文件确实完整地到达那里,没有任何问题。
![][6]
我必须说,在其首次发布时,开发人员为我们提供了一个有用的实用程序。我很高兴看到其未来版本将提供什么样的改进。
一位 Reddit 用户 [询问][7] 是否有任何计划**支持显示无线耳机的电池百分比**。对此,开发人员提到这样做很棘手,因为每个设备都遵循不同的规范,这使得这一目标更难实现。
### 📥 获得 Overskride
目前Overskride 只能通过 [GitHub 仓库][9] 以 Flatpak 软件包的形式提供。或者,你也可以从源代码开始编译。
> **[Overskride (GitHub)][10]**
我希望开发者在发布稳定版本后将其发布在 [Flathub][11] 上,以便用户可以使用。
--------------------------------------------------------------------------------
via: https://news.itsfoss.com/bluetooth-app-linux-overskride/
作者:[Sourav Rudra][a]
选题:[lujun9972][b]
译者:[geekpi](https://github.com/geekpi)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://news.itsfoss.com/author/sourav/
[b]: https://github.com/lujun9972
[1]: https://news.itsfoss.com/content/images/2023/10/Overskride_1.png
[2]: https://news.itsfoss.com/content/images/2023/04/Follow-us-on-Google-News.png
[3]: https://news.itsfoss.com/content/images/2023/10/Overskride_2.png
[4]: https://itsfoss.com/flatseal/
[5]: https://news.itsfoss.com/content/images/2023/10/Overskride_3.png
[6]: https://news.itsfoss.com/content/images/2023/10/Overskride_4.png
[7]: https://www.reddit.com/r/gnome/comments/17a5m99/full_release_of_my_bluetooth_app_d/k5b3ybg/
[9]: https://github.com/kaii-lb/overskride
[10]: https://github.com/kaii-lb/overskride/releases/
[11]: https://flathub.org/en
[0]: https://img.linux.net.cn/data/attachment/album/202310/25/092956c1jb1bqxx8qryrju.jpg
@ -0,0 +1,80 @@
[#]: subject: "Is This The Ultimate Video Streaming App?"
[#]: via: "https://news.itsfoss.com/video-streaming-app/"
[#]: author: "Ankush Das https://news.itsfoss.com/author/ankush/"
[#]: collector: "lujun9972/lctt-scripts-1693450080"
[#]: translator: "geekpi"
[#]: reviewer: "wxy"
[#]: publisher: "wxy"
[#]: url: "https://linux.cn/article-16318-1.html"
Grayjay可能是一个终极视频流应用程序
======
![][0]
> 一款可提供源代码的应用,让视频流应用不再烦恼。你怎么看?
视频流媒体服务大多包含 DRM并限制下载离线媒体拷贝即使你拥有这些媒体。
每个平台都有自己的一套规则,这些规则对创作者来说可能公平也可能不公平。那么,对于消费者来说呢?要关注多个网络,而且平台可用的订阅选项也很混乱。
如果我们有一个**单一应用让我们访问所有这些网络的视频怎么样?**
**不仅如此:**如果我们可以拥有**离线下载**的福利以及更多功能,例如能够在同一应用中**将视频投射到电视**,而无需使用多个应用,那会怎么样?
而且,这些功能都以**应用的形式存在,其源码可以检查和修改**。听起来令人印象深刻,对吧?
嗯,这就是一个组织 [FUTO][1][**Louis Rossmann**][2] 是其成员之一)提出的。
来认识一下 “**Grayjay**”,你可以在该应用中**跨多个网络关注内容创作者,没有 DRM 和任何不必要的限制**。
### Grayjay专注于创作者
![][3]
Grayjay 是一款媒体应用,旨在让创作者控制他们拥有的视频以及任何获利机会。
该应用处于初始阶段,因此其主要目标尚未反映在该应用中。
目前,该应用是一个**提供源码**的产品,你可以在其中观看来自你喜爱的网络的视频,同时摆脱跟踪器、广告和平台的其他烦恼。
**Louis Rossmann** 是这个项目的中心,我相信这让我们值得去看一下:
> 💡 Louis 是一位颇受欢迎的 YouTuber他发布的视频涉及可维修性、反竞争行为等。他总是说出自己的想法这使我们大多数人都喜欢他。
具体大家可以观看视频,我给大家总结一下,节省大家的时间:
* Grayjay 的目标是成为 [Newpipe][4] 和 [YouTube ReVanced][5] 等应用的**更好替代品**。
* 该应用**提供源码**,它允许你查看源代码,根据你的要求制作你自己的版本(但不能用于商业用途)。
* 该应用**不是免费的**,其商业模式将用户视为客户,而不是产品。然而,由于没有任何 DRM该应用实际上会让你获得无限期的免费试用。
* 虽然 Louis 将其称为“开源”以让大多数人理解,但它并不具有你通常期望的标准许可证。他们选择使用非标准许可证,以便能够阻止该应用的恶意重新分发。
你可以在其 [GitLab 页面][6] 及其 [官方网站][7] 上仔细查看它。
> **[Grayjay][7]**
目前仅适用于安卓。考虑到它仍在开发中,你可以决定安装 APK 来尝试一下。
💬 该应用提供了源代码Louis 提到它可以针对个人使用场景重新分发。你对 Grayjay 有何看法? 分享你的意见。
--------------------------------------------------------------------------------
via: https://news.itsfoss.com/video-streaming-app/
作者:[Ankush Das][a]
选题:[lujun9972][b]
译者:[geekpi](https://github.com/geekpi)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://news.itsfoss.com/author/ankush/
[b]: https://github.com/lujun9972
[1]: https://futo.org/
[2]: https://www.youtube.com/@rossmanngroup
[3]: https://news.itsfoss.com/content/images/2023/10/gray-jay.jpg
[4]: https://newpipe.net/
[5]: https://revanced.app/
[6]: https://gitlab.futo.org/videostreaming/grayjay
[7]: https://grayjay.app/
[0]: https://img.linux.net.cn/data/attachment/album/202310/26/115233ydzdt8kedtkkt7ti.jpg
@ -0,0 +1,97 @@
[#]: subject: "Geany 2.0 Release Makes it a More Versatile Text Editor and IDE"
[#]: via: "https://news.itsfoss.com/geany-2-0/"
[#]: author: "Sourav Rudra https://news.itsfoss.com/author/sourav/"
[#]: collector: "lujun9972/lctt-scripts-1693450080"
[#]: translator: "geekpi"
[#]: reviewer: "wxy"
[#]: publisher: "wxy"
[#]: url: "https://linux.cn/article-16327-1.html"
Geany 2.0 发布使其成为更通用的文本编辑器和 IDE
======
![][0]
> Geany 2.0 带来了新的文件类型和其他改进。
[Geany][1] 被认为是 [Linux 上最好的 Python IDE][2] 之一,它是**一个基于 GTK3 工具包的开源、轻量级 IDE**。
考虑到 Geany 的功能集对各种用户的吸引力,它也可以算作 [Linux 上 Notepad++ 的替代品][3]之一。
现在,新版本已以 “**Geany 2.0**” 的形式推出,提供了许多改进。
让我们看看有什么。
### 🆕 Geany 2.0:有什么新变化?
![][4]
Geany 版本的亮点可以分为两个不同的部分,主要涵盖界面和对文件类型的更好支持。
#### 文件类型升级
通过更新 [基本类型][5],改进了 **Kotlin** 的文件类型配置。同样,对于 **Python** ,针对 Python 3 重写了标准库标签创建脚本,并改进了对 **ctags** 文件格式的支持。
最后Geany 添加了对 [AutoIt][6] 和 [GDScript][7] 等新文件类型的**支持**,并更新了 **Nim** 和 **PHP** 的文件类型配置,以解决一些长期存在的问题。
#### 界面改进
![][8]
Geany 2.0 在文档列表的侧边栏中提供了**新的树视图**。它是默认启用的,因此你无需执行任何操作。
当你有大量单独的文件需要检查时,它非常实用。你还可以折叠特定文件夹以最大程度地减少混乱。
![][9]
**编译器消息现在使用深色主题友好的颜色**,以便你可以轻松阅读消息。这在深夜编码时应该很有帮助。
此外,**一个新的确认对话框**添加到整个会话的“搜索和替换”功能中,并且**添加了一个选项以在符号树中显示符号**,而无需类别组。
#### 🛠️ 其他变化
除了上述内容之外,还有一些值得注意的变化:
* Geany 现在需要 GTK 3.24。
* 你现在可以滚动文档选项卡。
* 更新了多种语言的翻译。
* 修复了文件类型更改时的关键词着色问题。
* 现在默认启用“更改历史记录”功能。
有关此版本的更多详细信息,你可以浏览[官方发行说明][10]。
### 📥 下载 Geany 2.0
由于它是 **跨平台 IDE**Geany 2.0 可用于 **Linux**、**Windows** 和 **macOS**。你可以前往 [Flathub 商店][11]或其 [官方网站][12]下载你选择的包。
> **[Geany][12]**
如果你对源代码感兴趣,你还可以访问其 [GitHub 仓库][13]。
--------------------------------------------------------------------------------
via: https://news.itsfoss.com/geany-2-0/
作者:[Sourav Rudra][a]
选题:[lujun9972][b]
译者:[geekpi](https://github.com/geekpi)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://news.itsfoss.com/author/sourav/
[b]: https://github.com/lujun9972
[1]: https://www.geany.org/
[2]: https://itsfoss.com/best-python-ides-linux/
[3]: https://itsfoss.com/notepad-alternatives-for-linux/
[4]: https://news.itsfoss.com/content/images/2023/10/Geany_2.0_1.png
[5]: https://kotlinlang.org/docs/basic-types.html
[6]: https://en.wikipedia.org/wiki/AutoIt
[7]: https://docs.godotengine.org/en/stable/tutorials/scripting/gdscript/index.html
[8]: https://news.itsfoss.com/content/images/2023/10/Geany_2.0_2.png
[9]: https://news.itsfoss.com/content/images/2023/10/Geany_2.0_3.png
[10]: https://www.geany.org/documentation/releasenotes/
[11]: https://flathub.org/apps/org.geany.Geany
[12]: https://www.geany.org/download/releases/
[13]: https://github.com/geany/geany
[0]: https://img.linux.net.cn/data/attachment/album/202310/29/171811kqqcqggcg2h517go.jpg
@ -0,0 +1,192 @@
[#]: subject: "Some miscellaneous git facts"
[#]: via: "https://jvns.ca/blog/2023/10/20/some-miscellaneous-git-facts/"
[#]: author: "Julia Evans https://jvns.ca/"
[#]: collector: "lujun9972/lctt-scripts-1693450080"
[#]: translator: "KaguyaQiang"
[#]: reviewer: "wxy"
[#]: publisher: "wxy"
[#]: url: "https://linux.cn/article-16316-1.html"
一些被忽略的 Git 知识
======
![][0]
我一直在慢慢地撰写关于 Git 工作原理的文章。尽管我曾认为自己对 Git 非常了解,但像往常一样,当我尝试解释某事的时候,我又学到一些新东西。
现在回想起来,这些事情都不算太令人吃惊,但我以前并没有清楚地思考过它们。
事实是:
* “索引”、“暂存区” 和 `--cached` 是一回事
* 隐匿文件就是一堆提交
* 并非所有引用都是分支或标签
* 合并提交不是空的
下面我们来详细了解这些内容。
### “索引”、“暂存区” 和 `--cached` 是一回事
当你运行 `git add file.txt`,然后运行 `git status`,你会看到类似以下的输出:
```
$ git add content/post/2023-10-20-some-miscellaneous-git-facts.markdown
$ git status
Changes to be committed:
(use "git restore --staged <file>..." to unstage)
new file: content/post/2023-10-20-some-miscellaneous-git-facts.markdown
```
人们通常称这个过程为“暂存文件”或“将文件添加到暂存区”。
当你使用 `git add` 命令来暂存文件时Git 在后台将文件添加到其对象数据库(在 `.git/objects` 目录下),并更新一个名为 `.git/index` 的文件以引用新添加的文件。
Git 中的这个“暂存区”事实上有 3 种不同的名称,但它们都指的是同一个东西(即 `.git/index` 文件):
* `git diff --cached`
* `git diff --staged`
* `.git/index` 文件
我觉得自己早该认识到这一点,但之前并没有,所以在这里提醒一下。
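可以自己快速验证一下这三个名字指的是同一个东西(示例命令,输出从略):

```
git add file.txt
git diff --cached    # 查看已暂存的改动
git diff --staged    # 与上一条输出完全相同,--staged 是 --cached 的同义词
file .git/index      # “暂存区”本体就是这个二进制文件
```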
### 隐匿文件就是一堆提交
当我运行 `git stash` 命令来保存更改时,我一直对这些更改究竟去了哪里感到有些困惑。事实上,当你运行 `git stash` 命令时Git 会根据你的更改创建一些提交,并用一个名为 `stash` 的引用来标记它们(在 `.git/refs/stash` 目录下)。
让我们将此博客文章隐匿起来,然后查看 `stash` 引用的日志:
```
$ git log stash --oneline
6cb983fe (refs/stash) WIP on main: c6ee55ed wip
2ff2c273 index on main: c6ee55ed wip
... some more stuff
```
现在我们可以查看提交 `2ff2c273` 以查看其包含的内容:
```
$ git show 2ff2c273 --stat
commit 2ff2c273357c94a0087104f776a8dd28ee467769
Author: Julia Evans <julia@jvns.ca>
Date: Fri Oct 20 14:49:20 2023 -0400
index on main: c6ee55ed wip
content/post/2023-10-20-some-miscellaneous-git-facts.markdown | 40 ++++++++++++++++++++++++++++++++++++++++
```
毫不意外,它包含了这篇博客文章。这很合理!
实际上,`git stash` 会创建两个独立的提交:一个是索引提交,另一个是你尚未暂存的改动提交。这让我感到很振奋,因为我一直在开发一款工具,用于快照和恢复 Git 仓库的状态(也许永远不会发布),而我提出的设计与 Git 的隐匿实现非常相似,所以我对自己的选择感到满意。
显然 `stash` 中的旧提交存储在 reflog 中。
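想自己确认的话,可以看看这个引用和它的 reflog示例命令输出因仓库而异

```
git rev-parse stash    # stash 引用当前指向的提交
git reflog stash       # 旧的 stash 记录就保存在 stash 的 reflog 里
```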
### 并非所有引用都是分支或标签
Git 文档中经常泛泛地提到 “引用”,这使得我有时觉得很困惑。就个人而言,我在 Git 中处理 “引用” 的 99% 时间是指分支或 HEAD而剩下的 1% 时间是指标签。事实上,我以前完全不知道任何不是分支、标签或 `HEAD` 的引用示例。
但现在我知道了一个例子—— `stash` 是一种引用,而它既不是分支也不是标签!所以这太酷啦!
以下是我博客的 Git 仓库中的所有引用(除了 `HEAD`
```
$ find .git/refs -type f
.git/refs/heads/main
.git/refs/remotes/origin/HEAD
.git/refs/remotes/origin/main
.git/refs/stash
```
人们在本帖回复中提到的其他一些参考资料:
- `refs/notes/*`,来自 [`git notes`][5]
- `refs/pull/123/head``refs/pull/123/head`` 用于 GitHub 拉取请求(可通过 `git fetch origin refs/pull/123/merge` 获取)
- `refs/bisect/*`,来自 `git bisect`
### 合并提交不是空的
这是一个示例 Git 仓库,其中我创建了两个分支 `x``y`,每个分支都有一个文件(`x.txt` 和 `y.txt`),然后将它们合并。让我们看看合并提交。
```
$ git log --oneline
96a8afb (HEAD -> y) Merge branch 'x' into y
0931e45 y
1d8bd2d (x) x
```
如果我运行 `git show 96a8afb`,合并提交看起来是“空的”:没有差异!
```
git show 96a8afb
commit 96a8afbf776c2cebccf8ec0dba7c6c765ea5d987 (HEAD -> y)
Merge: 0931e45 1d8bd2d
Author: Julia Evans <julia@jvns.ca>
Date: Fri Oct 20 14:07:00 2023 -0400
Merge branch 'x' into y
```
但是,如果我单独比较合并提交与其两个父提交之间的差异,你会发现当然**有**差异:
```
$ git diff 0931e45 96a8afb --stat
x.txt | 1 +
1 file changed, 1 insertion(+)
$ git diff 1d8bd2d 96a8afb --stat
y.txt | 1 +
1 file changed, 1 insertion(+)
```
现在回想起来,合并提交并不是实际上“空的”(它们是仓库当前状态的快照,就像任何其他提交一样),这一点似乎很明显,只是我以前从未思考为什么它们看起来为空。
显然,这些合并差异为空的原因是合并差异只显示**冲突** —— 如果我创建一个带有合并冲突的仓库(一个分支在同一文件中添加了 `x`,而另一个分支添加了 `y`),然后查看我解决冲突的合并提交,它看起来会像这样:
```
$ git show HEAD
commit 3bfe8311afa4da867426c0bf6343420217486594
Merge: 782b3d5 ac7046d
Author: Julia Evans <julia@jvns.ca>
Date: Fri Oct 20 15:29:06 2023 -0400
Merge branch 'x' into y
diff --cc file.txt
index 975fbec,587be6b..b680253
--- a/file.txt
+++ b/file.txt
@@@ -1,1 -1,1 +1,1 @@@
- y
-x
++z
```
这似乎是在告诉我,一个分支添加了 `x`,另一个分支添加了 `y`,而合并提交把冲突解决成了 `z`。但在前面的示例中没有冲突,所以 Git 没有显示任何差异。
(感谢 Jordi 告诉我合并差异的工作原理)
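顺带一提,如果想让 `git show` 直接显示合并提交相对每个父提交的差异,可以加上 `-m` 选项(示例,输出从略):

```
git show -m 96a8afb --stat    # 对每个父提交各输出一份差异
```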
### 先这样吧
先写到这里吧,等我学到更多 Git 知识时,也许会再写一篇这样的博客文章。
*题图MJ/03bfecc3-944e-47a0-a4fd-575293d2ba92*
--------------------------------------------------------------------------------
via: https://jvns.ca/blog/2023/10/20/some-miscellaneous-git-facts/
作者:[Julia Evans][a]
选题:[lujun9972][b]
译者:[KaguyaQiang][c]
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://jvns.ca/
[b]: https://github.com/lujun9972
[c]: https://github.com/KaguyaQiang
[5]: https://tylercipriani.com/blog/2022/11/19/git-notes-gits-coolest-most-unloved-feature/
[0]: https://img.linux.net.cn/data/attachment/album/202310/25/122259mfu0uowyppuyfdyo.jpg
@ -0,0 +1,242 @@
[#]: subject: "Bitwarden vs. Proton Pass: Comparing Top Open-Source Password Managers"
[#]: via: "https://itsfoss.com/bitwarden-vs-proton-pass/"
[#]: author: "Ankush Das https://itsfoss.com/author/ankush/"
[#]: collector: "lujun9972/lctt-scripts-1693450080"
[#]: translator: "ChatGPT"
[#]: reviewer: "wxy"
[#]: publisher: "wxy"
[#]: url: "https://linux.cn/article-16306-1.html"
Bitwarden 与 Proton Pass顶级开源密码管理器的比较
======
![][0]
> 你最钟爱哪一个开源密码管理器?
Bitwarden 和 Proton Pass 是两个杰出的开源密码管理器。
Bitwarden 已经成为一个可靠的选择,运行稳定已经超过六年了,而 Proton Pass 则是较新的参与者。
你应该选哪一个呢?是**选择已被大众信任的密码管理器还是选择由以隐私为导向的 Proton 新开发的产品**?
我一直使用 Bitwarden 和带高级特性的 Proton Pass。主要使用的是 Bitwarden但自 Proton Pass 推出后我也在试用它。
在这里,我会分享我对两者的使用体验,以及在选择密码管理器时需要了解的一些注意事项。
### 应用场景和应用程序的可用性
选用密码管理器时,应用程序的可用情况和你的使用场景起着重要角色。
你需要提些问题给自己:
* 我在哪些地方(如桌面、移动设备或网页浏览器)需要密码管理器?
* 它有提供哪些额外功能?
* 是否坚持使用一个服务满足多种功能需求?
我会在文章的后半部分详细阐述这些功能特性。但首先,你需要确定你想在哪里使用密码管理器,以及是否希望单独保持密码管理器服务。
#### Proton Pass
在编写本文时Proton Pass 仅以 **浏览器扩展** 形式存在,同时提供给 **移动平台** 使用。
![][1]
你可以为 **Mozilla Firefox、Google Chrome、Brave、Edge** 及其它基于 Chrome 的浏览器获取扩展。你也可以选择在你的 **安卓或 iOS** 设备上安装它。
![][2]
✅ _如果你不需要桌面应用的密码管理器同时希望继续使用 Proton 提供的全部服务,那么 Proton Pass 是一个适合的选择。_
#### Bitwarden
反之Bitwarden 可以作为 **桌面应用**,支持在 **Windows、 macOS、Linux** 上运行。
此外,你可以为 **Google Chrome、Firefox、Vivaldi, Opera、Edge、Tor 和 DuckDuckGo for Mac** 获取相应的扩展。
![][3]
对于移动平台来说Bitwarden 支持 iPhone、Apple Watch 和安卓手机。你也可以从 F-Droid 安装到安卓设备上。
还不仅限于这些,你可以将 Bitwarden 作为一款 **网页应用**,或通过 **命令行接口** 来使用。
✅ _如果你需要一个能够在多个平台上无缝使用的密码管理器那么 Bitwarden 就是你完美的选择。_
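前面提到的命令行接口(`bw`)用起来大致是这样(命令以官方文档为准,这里只是一个简单示意,`github.com` 为假设的条目名):

```
bw login                       # 登录 Bitwarden 账户
bw unlock                      # 解锁保险库,获得会话密钥
bw list items --search github  # 搜索保险库中的条目
bw get password github.com     # 取出某个条目的密码
```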
### 用户体验
#### Bitwarden
Bitwarden 的用户体验简洁且低调。
以下是浏览器插件的界面样式:
![][4]
虽然用户界面随着时间的推移有所改进,但始终忠于其核心理念,也就是 **注重简洁而不是华丽的 UI**。
它不会打扰你的操作,始终提供熟悉的用户体验。
![][5]
你可以在预设的 “dark”、“light”、“solarized dark” 和 “nord” 主题间自由切换。
![][6]
无论你正在使用桌面应用、移动应用还是浏览器扩展,它都能给你带来同样的便利。
#### Proton Pass
Proton Pass 在其布局方面 **颇具特色** ,每次你访问该扩展时,它都会给出一个详细的凭证概览。
![][7]
有些用户可能更倾向于 **现代化的 UI 处理方式**,这完全看你的个人喜好。
我个人更倾向于 Bitwarden 的传统设计方式。
### 价格
你可以免费开始使用这两种服务。
Bitwarden 和 Proton Pass 在其免费计划中都不限制存储的登录凭证数量和可使用设备的数量。
如果你需要包括紧急访问、家庭共享、安全存储、二次验证、电子邮箱别名隐藏等特性,你需要升级为高级版。
![][8]
[Bitwarden][9] 的年费仅为 **10 美元**而其家庭计划_包含六个账户_的年费为 **40 美元**。这个价格对大多数人来说是**极其实惠**的。
相比之下,[Proton Pass][10] 的价格较高,其 Plus 计划的年费为 **47.88 欧元**
![][11]
不过,如果你使用了所有其他 Proton 的服务,并选择了 Proton 的无限制订阅,你就可以获得包括 **Proton Pass、Mail、VPN、Drive 和 Calendar** 在内的高级特性。
### 功能
这两个密码管理器功能都十分完善,因此每一种都能提供人们所需要的核心特性。
它们共同拥有的特性包括:
* 密码生成器
* 安全笔记
* 自动填充
* 卡片信息和登录凭证
* 便于随时获取凭证的手机应用
接下来,让我来突出一下根据我个人的使用经验,每一种服务各自独特的优点:
#### Bitwarden
Bitwarden 的一项关键功能是其 “<ruby>发送<rt>Send</rt></ruby>” 功能,它允许你发送一个文件(最大 500 MB或一段文本/便笺给任何人,这都是通过一个安全连接实现的,并且在整个过程中都实现了端到端的加密。这项功能可以在 **桌面应用、扩展插件和网页保险库** 中使用。
![][13]
你可以通过添加一段只有接收者知道的密码短语来保护这个链接。还有更多的自定义选项,例如设定链接的过期时间,或在文件被下载后销毁链接。
我不认为这是一种安全发送文件的方法,更多的是适合发送私人文件(如电子邮件附件)和文本文件。
接下来Bitwarden 提供了一个 **家庭计划**,允许你与 **其他五个账户** 共享一个订阅。Proton Pass 并没有此类服务。
我想着重强调的另一个功能是:**紧急访问**。
因为密码管理器储存了你所有的登录信息,它就是一个包含 **所有你的访问密钥** 的地方。你可以设置紧急访问功能,以方便你信任的朋友或者家庭成员在你不幸出现紧急状况后访问你的密码。
![][14]
当然,在授权用户访问你的账户之前,你可以设定一个期限以确认或拒绝这个访问请求。如果你没有采取任何行动,那么这个访问权限将会授予你的信任用户。
![][15]
> 📋 紧急联系人选项只能在 [网络保险库][16] 中访问。
**值得注意的是:** 这两种服务的密码生成器都包含历史记录,但 Bitwarden **保留历史时间更长**,而 Proton Pass 只保留一天的历史记录。
![][17]
其他功能差异包括:
* 导出为 .CSV 文件
* 调整自动填充行为
* 访问网络保险库
* 桌面应用
* 记录身份信息
对于所有列出的功能,在我使用 Bitwarden 的过程中,我都没有遇到任何重大的问题。
**我唯一注意到的问题**:有时在我的安卓手机上,自动填充功能并没有在键盘应用中作为建议显示。当然,这取决于各个智能手机制造商提供的定制安卓体验,因此不一定是 Bitwarden 特有的问题。
#### Proton Pass
如果我们从“功能数量”上进行比较Bitwarden 会占据优势。
但是作为一个以隐私为重点的工具Proton Pass 实现了密码管理器所有你需要的重要功能,甚至超越了这些。
![][19]
得益于 [SimpleLogin][20] 方面的专长Proton Pass 支持生成电子邮件别名。
如果你不太了解SimpleLogin 是最受欢迎的 [工具之一,用于保护你的电子邮件地址][21]。
所以,当 Proton Pass 集成了这个功能后,用户可以 **便捷地创建电子邮件别名**,并同时保存登录信息。你在 Proton Pass 上注册的电子邮件将会是实际的电子邮件地址。
我希望他们提供一个设定新的目标电子邮件地址的选项,这将使 Proton Pass 的额外费用更具价值。
此外,如果你是一位使用了部分或所有 Proton 服务的用户,使用 Proton Pass 将会是一种良好的用户体验。因为你无需为其他平台切换或注册。
使用 Proton Pass 还可以获得基本的 **导入/导出,控制某些安全措施,修改密码管理器行为** 的功能。
![][22]
因此,对于 Proton 用户而言Proton Pass 可以是一种一站式的解决方案。
是的我暂时没有注意到任何关于其移动应用的问题Proton Pass 到目前为止很好。
### 你应该选择哪个?
考虑到它们的共性,这主要取决于你个人的使用体验、预算(如果选择付费),以及功能集。
对于我个人的使用,我暂时没有看到 Proton Pass 替换 Bitwarden 的需要。
然而,如果我决定购买 Proton 的无限制订阅,或是更加投入到 Proton 的各项服务中去,我可能会放弃 Bitwarden。
💬 你怎么认为呢?你认为 Proton Pass 值得额外付费吗,或者对于 Proton 的捆绑订阅你有什么看法Bitwarden 是你的最爱吗?欢迎在下方评论分享你的想法。
*题图MJ/a2f5d428-b853-4312-837c-9d66371dd5dc*
--------------------------------------------------------------------------------
via: https://itsfoss.com/bitwarden-vs-proton-pass/
作者:[Ankush Das][a]
选题:[lujun9972][b]
译者:[ChatGPT](https://linux.cn/lctt/ChatGPT)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://itsfoss.com/author/ankush/
[b]: https://github.com/lujun9972
[1]: https://itsfoss.com/content/images/2023/10/proton-pass.jpg
[2]: https://itsfoss.com/content/images/2023/10/proton-pass-ui-1.jpg
[3]: https://itsfoss.com/content/images/2023/10/bitwarden-extension-ui.jpg
[4]: https://itsfoss.com/content/images/2023/10/bitwarden-extension-firefox.jpg
[5]: https://itsfoss.com/content/images/2023/10/bitwarden-login.jpg
[6]: https://itsfoss.com/content/images/2023/10/bitwarden-theme.jpg
[7]: https://itsfoss.com/content/images/2023/10/proton-pass-ui-2.jpg
[8]: https://itsfoss.com/content/images/2023/10/bitwarden-pricing.jpg
[9]: https://bitwarden.com/pricing/
[10]: https://account.proton.me/pass/signup
[11]: https://itsfoss.com/content/images/2023/10/protonpass-pricing.jpg
[12]: https://proton.go2cloud.org/favicons/apple-touch-icon.png
[13]: https://itsfoss.com/content/images/2023/10/bitwarden-send-web-vault.jpg
[14]: https://itsfoss.com/content/images/2023/10/emergency-access-bitwarden.jpg
[15]: https://itsfoss.com/content/images/2023/10/bitwarden-emergency-invite.jpg
[16]: https://vault.bitwarden.com/
[17]: https://itsfoss.com/content/images/2023/10/bitwarden-password-generator.jpg
[18]: https://itsfoss.com/content/images/size/w256h256/2022/12/android-chrome-192x192.png
[19]: https://itsfoss.com/content/images/2023/10/proton-pass-password-generator.jpg
[20]: https://simplelogin.io/
[21]: https://itsfoss.com/protect-email-address/
[22]: https://itsfoss.com/content/images/2023/10/proton-pass-settings.jpg
[0]: https://img.linux.net.cn/data/attachment/album/202310/22/075735dk7c6b4raibhkutq.jpg
@ -0,0 +1,70 @@
[#]: subject: "KDE Plasma 6 Will Not Support Older Desktop Widgets"
[#]: via: "https://news.itsfoss.com/kde-plasma-6-widgets/"
[#]: author: "Ankush Das https://news.itsfoss.com/author/ankush/"
[#]: collector: "lujun9972/lctt-scripts-1693450080"
[#]: translator: "geekpi"
[#]: reviewer: "wxy"
[#]: publisher: "wxy"
[#]: url: "https://linux.cn/article-16333-1.html"
KDE Plasma 6 将不支持较旧的桌面小部件
======
![][0]
> KDE Plasma 6 进行了一些修改,需要小部件作者进行调整。开发人员,移植时间到了!
KDE Plasma 6 是备受期待的桌面环境版本升级版本。
最近,其发布时间表公布,第一个 Alpha 版本将于 **2023 年 11 月 8 日**上线,最终 **Beta 版本于 2024 年 1 月 31 日**上线,稳定版计划于 **2024 年 2 月 28 日**上线。
考虑到 KDE Plasma 5.x 系列包含多项改进和功能添加,许多用户对 KDE Plasma 6 带来的功能感到期待。
如果你好奇,我们已经介绍了 [KDE Plasma 6 的主要变化][1]。因此,它会发生重大变化也就不足为奇了。
然而,在进行重大修改后,可能会出现一些破坏体验的改动,例如无法在 Plasma 6 上运行任何旧版小部件。
### 给小部件开发者的移植通知
在 KDE 的 _Nate Graham_ 最近发表的一篇 [博客文章][2] 中,向 Plasma 5 小部件作者发出了正式的警告。
其重点指出的信息是:
> **你需要将你的小部件移植到更新的 API以使它们与 Plasma 6 兼容!**
Plasma 小部件 API 已随着即将发布的版本进行了修改。而且,为了适应这种变化并保持运行,小部件开发人员必须将他们的创作移植到更新的 API。
如果开发人员不移植小部件以使用更新的 API它将无法在 KDE Plasma 6 中运行。
当然,流行的小部件开发人员很可能会进行移植。但是,如果你使用的小部件虽然还能使用,但没有积极维护,那么你将不得不在 KDE Plasma 6 中放弃它。
他们已提供了一份 [移植指南][4] 供开发人员遵循。
你可以在 “[Plasma 6 扩展][5]” 下找到与 Plasma 6 兼容的小部件。在撰写本文时,只能看到列出了两个第三方小部件。
![][6]
因此,当你喜欢的小部件被移植到 KDE Plasma 6 并由相关开发者上传后,你就会发现它们的踪迹。
💬 你期待移植哪些 KDE 小部件? 你认为你会错过某些不再维护的小部件吗?在下面的评论中分享你的想法。
--------------------------------------------------------------------------------
via: https://news.itsfoss.com/kde-plasma-6-widgets/
作者:[Ankush Das][a]
选题:[lujun9972][b]
译者:[geekpi](https://github.com/geekpi)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://news.itsfoss.com/author/ankush/
[b]: https://github.com/lujun9972
[1]: https://linux.cn/article-15821-1.html
[2]: https://pointieststick.com/2023/10/24/its-time-to-port-your-widgets-to-plasma-6/
[3]: https://news.itsfoss.com/content/images/2023/04/Follow-us-on-Google-News.png
[4]: https://develop.kde.org/docs/plasma/widget/porting_kf6/
[5]: https://store.kde.org/browse?cat=705&ord=latest
[6]: https://news.itsfoss.com/content/images/2023/10/kde-plasma-6-extension.jpg
[0]: https://img.linux.net.cn/data/attachment/album/202310/30/225133j5bvbvb5bzdgg4gn.jpg
@ -0,0 +1,170 @@
[#]: subject: "Moosync: A Feature-Rich Open-Source Music Player"
[#]: via: "https://news.itsfoss.com/moosync/"
[#]: author: "Sourav Rudra https://news.itsfoss.com/author/sourav/"
[#]: collector: "lujun9972/lctt-scripts-1693450080"
[#]: translator: "ChatGPT"
[#]: reviewer: "wxy"
[#]: publisher: "wxy"
[#]: url: "https://linux.cn/article-16329-1.html"
Moosync一款充满特色的开源音乐播放器
==========
![][0]
> Moosync 音乐播放器是一个适用于本地收藏与流媒体音乐的魅力十足的跨平台应用。
想听好音乐吗?🎶
首次亮相的**跨平台开源音乐播放器 Moosync** ,希望成为“欢迎全社区参与”的音乐播放器。
过去我们曾介绍过像 [Harmonoid][1] 和 [MusicPod][2] 这样的应用,但是它们**主要是专注于离线使用**。
与之不同的是Moosync 的独特之处。让我告诉你为什么。
### Moosync概况 ⭐
![][3]
Moosync 是一个主要基于 **Vue****TypeScript** 编程语言开发的音乐播放器。
Moosync **高度可定制**且支持 [YouTube][4]、[Spotify][5] 以及 [LastFM][6] 等多种服务。其关键特性包含:
* 支持显示歌词
* 无广告播放
* 支持本地音乐文件
#### 初次体验 👨‍💻
在我在我的 Ubuntu 系统上安装它之后,我开始了**快速设置向导**。
首先设定我的音乐库的位置,它将会从此处获取本地音乐文件,过程包含了许多步骤。
![][7]
然后,显示了**可以连接的服务选项**。
我**试图连接我的 Spotify 账户**,然而尽管我向 Moosync 提供了需要的信息,**操作无法成功**。不过幸运的是,**这一步并非必要**,所以我选择跳过。
![][8]
随后,它显示了空白的 “<ruby>所有歌曲<rt>All Songs</rt></ruby>” 标签页,由于目前我未连接任何服务或者本地文件,所以这里是空的。
![][9]
考虑我没有本地音乐或 Spotify 音乐,我点击了 YouTube 过滤器并**搜寻我喜欢的曲目**以填充 Moosync。
搜索功能还包括歌曲、艺术家、播放列表和专辑的过滤器。
![][11]
我可以以右键点击单曲,并添加至播放队列或者立即播放。除此之外还有其他选项,会有所不同。
![][12]
当我在 Spotify 数据库中搜索并尝试做同样的操作时,不幸的是,我**必须登录 Spotify 才能在 Moosync 中使用它**,而这一步之前就没有成功。
![][13]
> 📋 如果你在寻找更好的 Spotify 支持,我建议你尝试一下 [BlackHole][14] 这个音乐应用。
而我选择继续,去**了解这个音乐播放器**。进入 “<ruby>队列<rt>Queue</rt></ruby>” 标签页或者点击应用程序右下角向上指向的箭头都可以访问它。
整齐的布局使人感到熟悉和舒适。
![][15]
在 “<ruby>所有歌曲<rt>All Songs</rt></ruby>” 标签页,我添加进 Moosync 的所有歌曲都在这,我可以**选择从列表开头播放**,或者**添加到当前队列**,甚至可以在其中**随机播放 100 首歌曲**。
![][16]
在 “<ruby>播放列表<rt>Playlists</rt></ruby>” 标签页,我保存的所有来自 YouTube 的播放清单都在这。它还提供了本地文件和 YouTube 间的排序选项。
![][17]
在 “<ruby>专辑<rt>Albums</rt></ruby>” 标签页,我保存的所有专辑都以一个整齐的网格布局排列。
![][18]
类似的,“<ruby>艺术家<rt>Artists</rt></ruby>” 标签页展示了我添加到 Moosync 库的艺术家。出于某种原因,它没有加载缩略图。
<ruby>类别<rt>Genres</rt></ruby>” 标签页我也跳过了,因为它**似乎不能正常工作**。
![][19]
最后是 “<ruby>探索<rt>Explore</rt></ruby>” 标签页,这里显示了我**听过多少分钟的音乐**。
![][20]
此外,**你可以在 Moosync 中找到许多有用的自定义设置**来优化你的使用体验。
首先在 “<ruby>主题<rt>Theme</rt></ruby>”设置中,你可以在**三个主题**和**两种布局视图**中选择,还有选项**上传或设计自定义主题**。
![][21]
应用还有**许多扩展**,一种是**开启 Discord 的富状态展示Rich Presence支持**,另一种是**与 Soundcloud 集成**,此外还有很多。
![][22]
显然,**Moosync 支持键盘快捷方式**,你可以根据个人的喜好进行设置。
![][23]
在我写这篇文章的时候Moosync 这里和那里都有一些小瑕疵,但我认为它作为 [最适合 Linux 的音乐播放器之一][24] 的竞争者非常有竞争力。
### 📥 下载 Moosync
Moosync 适用于 **Linux****Windows** 以及 **macOS**。你可以在其[官方网站][26]下载你需要的安装包。
> **[Moosync][26]**
**对于 Linux 用户**,你也可以在 [**Flathub 商店**][27][**Snap 商店**][28] 和 [**AUR**][29] 下载 Moosync。
你还可以浏览他们的 [GitHub 仓库][30],获取源代码以及更多信息。
💬 对这款应用有何看法?欢迎在评论区告诉我。
--------------------------------------------------------------------------------
via: https://news.itsfoss.com/moosync/
作者:[Sourav Rudra][a]
选题:[lujun9972][b]
译者:[ChatGPT](https://linux.cn/lctt/ChatGPT)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://news.itsfoss.com/author/sourav/
[b]: https://github.com/lujun9972
[1]: https://itsfoss.com/harmonoid/
[2]: https://news.itsfoss.com/musicpod/
[3]: https://news.itsfoss.com/content/images/2023/10/Moosync_1.png
[4]: https://www.youtube.com/
[5]: https://open.spotify.com/
[6]: https://www.last.fm/
[7]: https://news.itsfoss.com/content/images/2023/10/Moosync_2.png
[8]: https://news.itsfoss.com/content/images/2023/10/Moosync_3.png
[9]: https://news.itsfoss.com/content/images/2023/10/Moosync_5.png
[10]: https://news.itsfoss.com/content/images/2023/04/Follow-us-on-Google-News.png
[11]: https://news.itsfoss.com/content/images/2023/10/Moosync_6.png
[12]: https://news.itsfoss.com/content/images/2023/10/Moosync_7.png
[13]: https://news.itsfoss.com/content/images/2023/10/Moosync_8.png
[14]: https://news.itsfoss.com/blackhole-music-app/
[15]: https://news.itsfoss.com/content/images/2023/10/Moosync_9.png
[16]: https://news.itsfoss.com/content/images/2023/10/Moosync_11.png
[17]: https://news.itsfoss.com/content/images/2023/10/Moosync_12-1.png
[18]: https://news.itsfoss.com/content/images/2023/10/Moosync_13.png
[19]: https://news.itsfoss.com/content/images/2023/10/Moosync_13b.png
[20]: https://news.itsfoss.com/content/images/2023/10/Moosync_15.png
[21]: https://news.itsfoss.com/content/images/2023/10/Moosync_18.png
[22]: https://news.itsfoss.com/content/images/2023/10/Moosync_19.png
[23]: https://news.itsfoss.com/content/images/2023/10/Moosync_22.png
[24]: https://itsfoss.com/best-music-players-linux/
[25]: https://itsfoss.com/content/images/size/w256h256/2022/12/android-chrome-192x192.png
[26]: https://moosync.app/
[27]: https://flathub.org/apps/app.moosync.moosync
[28]: https://snapcraft.io/moosync
[29]: https://aur.archlinux.org/packages/moosync
[30]: https://github.com/Moosync/Moosync
[0]: https://img.linux.net.cn/data/attachment/album/202310/30/062404f6xrcor9t6s1b9lc.jpg
@ -0,0 +1,143 @@
[#]: subject: "Linux Kernel 6.6 Arrives With Numerous Refinements"
[#]: via: "https://news.itsfoss.com/linux-kernel-6-6-release/"
[#]: author: "Sourav Rudra https://news.itsfoss.com/author/sourav/"
[#]: collector: "lujun9972/lctt-scripts-1693450080"
[#]: translator: "ChatGPT"
[#]: reviewer: "wxy"
[#]: publisher: "wxy"
[#]: url: "https://linux.cn/article-16336-1.html"
Linux 内核 6.6 版本莅临,带来诸多变化
======
![][0]
> 笔记本的支持得到了提升,服务器性能得到了改进,更多内容一一揭晓。
又到了迎接 Linux 内核新版本发布的时刻!
Linux 内核 6.6 的发布,是一次大规模更新,针对各类笔记本、网络硬件、处理器等提供了**大量全方位的改良**。
Linus Torvalds [表示][1]
> 各种各样的修复散布各处,除了针对 r8152 驱动的一些重要修复外,其它的都相当小型。下面是上周的简短日志,供所有想要一探究竟、了解更多细节的读者。日志篇幅短小,可以快速翻阅。
### 🆕 Linux 内核 6.6:都有哪些新元素?
在我们开始之前,需要提醒大家,这一次发布的是**非长期支持版本**,因此不是每个人都必须进行升级,除非你想要享受最新、最棒的版本。
总的来说,让我们来看看这次发布版本的亮点:
* 针对英特尔芯片的优化
* 对笔记本的更佳支持
* 网络改进
* AMD 芯片性能提升
#### 针对英特尔芯片的优化
![][3]
Linux 内核 6.6 版本新增了对英特尔的**神经处理单元NPU**的**支持**,这样的技术原先被称作通用处理器。
这项技术预计将于今年晚些时候,随着英特尔的“[Meteor Lake][4]”芯片亮相而首次公开登场。这些 **NPU 将被专门用于处理人工智能工作负载**
英特尔甚至已经开始对即将发布的 “**Arrow Lake**”芯片进行**NPU 支持的初步工作**了!
此外,还新增了对英特尔的 [Shadow Stack][5] 的支持,因为他们的 [控制流执行技术][6]CET被引入到了内核中。其主要目的是防止现代英特尔 CPU从 Tiger Lake 起)上的**返回导向编程ROP攻击**。
#### 对笔记本的更佳支持
![][8]
**对于惠普笔记本,现在你可以直接在 Linux 中对 BIOS 进行管理了**,这要归功于 “**HP-BIOSCFG**” 驱动的实现。
根据报道,**从 2018 年起出产的惠普笔记本应该都可以支持这个驱动程序**。
**对于联想笔记本**,驱动程序已经更新,为更多的 IdeaPad 笔记本**添加了键盘背光控制**。
同样,**对于华硕笔记本**,现在 [ROG Flow X16][9]2023 年款)游戏笔记本,当屏幕翻转时可以**正确地启用平板模式**。
#### 网络改进
![][10]
在网络方面Linux 内核 6.6 版本带来了对如 [Atheros QCA8081][11]、**MediaTek MT7988**、**MediaTek MT7981**[NXP TJA1120 PHY][12] 等新型硬件的**支持**。
同时,各类驱动程序也进行了升级,例如 **高通 Wi-Fi 7 ath12k驱动程序**,它现在**支持 Extremely High ThroughputEHTPHY**。
此外,针对各类 Realtekrtl8xxxuWi-Fi 芯片**启用了 AP 模式**。
关于特定于网络的变动,你可以在这个 [拉取请求][13] 中查阅更多的详细信息。
#### AMD 芯片的性能提升
![][14]
随着 Linux 内核 6.6 的发布AMD 的开发人员推出了**两项尚未正式公开的新技术的支持**。
一项技术是对他们即将推出的 “**FreeSync Panel Replay**” 技术的支持,这一技术面向笔记本屏幕,可以自动降低刷新率以节省电力并降低 GPU 工作负载。
另一项技术被称为 “**动态提速控制**”,这是一项能够**提高某些 Ryzen SoC 性能的功能**,但关于它的更多细节比较少。
关于它的实施,你可以在这个 [补丁序列][15] 中查阅更多的信息。
#### 🛠️ 其他的变化与改进
其他方面,还包括一些值得注意的变动:
* 针对 **龙芯** 的大量新特性。
* **Rust 工具链** 升级至 v1.71.1 版本。
* 对 **RISC-V****Btrfs** 的多项改进。
* 完全移除了 [无线 USB][16] 和 [**Ultra-Wideband**][17] 的代码。
* 对 **英伟达**、**英特尔** 和 **AMD** 的**开源图形驱动程序** 的众多优化。
你可以在 [更新日志][18] 中查阅更详细的信息。
### 安装 Linux 内核 6.6 版本
如果你的体验是像 Arch 或 Fedora 那类的滚动发行版,升级过程非常简单。
你只需要稍等片刻,因为这些发行版在发布更新时需要一些时间。
对于其他的用户,你可以等待主要版本更新,或者**根据我们的指南在 Ubuntu 中升级至最新的主线 Linux 内核**
> **[在 Ubuntu 上安装最新的主线 Linux 内核][19]**
你可以在 [官方网站][20] 下载最新的 Linux 内核的 tarball _发布后可能需要些时间才能得到_
💬 你对这次内核发布有何想法?
--------------------------------------------------------------------------------
via: https://news.itsfoss.com/linux-kernel-6-6-release/
作者:[Sourav Rudra][a]
选题:[lujun9972][b]
译者:[ChatGPT](https://linux.cn/lctt/ChatGPT)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://news.itsfoss.com/author/sourav/
[b]: https://github.com/lujun9972
[1]: https://lkml.iu.edu/hypermail/linux/kernel/2310.3/06370.html
[2]: https://news.itsfoss.com/content/images/size/w256h256/2022/08/android-chrome-192x192.png
[3]: https://news.itsfoss.com/content/images/2023/10/Linux_Kernel_6.6_1.png
[4]: https://www.intel.com/content/www/us/en/content-details/788851/meteor-lake-architecture-overview.html
[5]: https://en.wikipedia.org/wiki/Shadow_stack
[6]: https://www.intel.com/content/www/us/en/developer/articles/technical/technical-look-control-flow-enforcement-technology.html
[7]: https://news.itsfoss.com/content/images/2023/04/Follow-us-on-Google-News.png
[8]: https://news.itsfoss.com/content/images/2023/10/Linux_Kernel_6.6_2-1.png
[9]: https://rog.asus.com/laptops/rog-flow/rog-flow-x16-2023-series/spec/
[10]: https://news.itsfoss.com/content/images/2023/10/Linux_Kernel_6.6_3.png
[11]: https://www.qualcomm.com/products/internet-of-things/networking/wi-fi-networks/qca8081
[12]: https://www.nxp.com/products/interfaces/ethernet-/automotive-ethernet-phys/tja1120-automotive-ethernet-phy-1000base-t1-asil-b-and-tc-10:TJA1120
[13]: https://lore.kernel.org/lkml/20230829125950.39432-1-pabeni@redhat.com/
[14]: https://news.itsfoss.com/content/images/2023/10/Linux_Kernel_6.6_4.png
[15]: https://lore.kernel.org/lkml/20230420163140.14940-1-mario.limonciello@amd.com/T/#m38ab23d70d213ceb67440168b3f71ad2be3aa564
[16]: https://en.wikipedia.org/wiki/Wireless_USB
[17]: https://en.wikipedia.org/wiki/Ultra-wideband
[18]: https://cdn.kernel.org/pub/linux/kernel/v6.x/ChangeLog-6.6
[19]: https://itsfoss.com/upgrade-linux-kernel-ubuntu/
[20]: https://www.kernel.org/
[0]: https://img.linux.net.cn/data/attachment/album/202310/31/204603jy3e8ezhtyehn9po.jpg
@ -0,0 +1,118 @@
[#]: subject: "Fedora Linux Flatpak cool apps to try for October"
[#]: via: "https://fedoramagazine.org/fedora-linux-flatpak-cool-apps-to-try-for-october/"
[#]: author: "Eduard Lucena https://fedoramagazine.org/author/x3mboy/"
[#]: collector: "lujun9972/lctt-scripts-1693450080"
[#]: translator: "geekpi"
[#]: reviewer: "wxy"
[#]: publisher: "wxy"
[#]: url: "https://linux.cn/article-16343-1.html"
10 月份在 Fedora Linux 上值得尝试的酷炫 Flatpak 应用
======
![][0]
> 本文介绍了 Flathub 中可用的项目以及安装说明。
[Flathub][2] 是获取和分发适用于所有 Linux 的应用的地方。它由 Flatpak 提供支持,允许 Flathub 应用在几乎任何 Linux 发行版上运行。
请阅读 “[Flatpak 入门][3]”。要启用 flathub 作为你的 flatpak 提供商,请使用 [flatpak 站点][4] 上的说明。
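启用 Flathub 的命令大致如下(请以 flatpak 站点上的最新说明为准):

```
flatpak remote-add --if-not-exists flathub https://flathub.org/repo/flathub.flatpakrepo
```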
### Warehouse
[Warehouse][5] 是一个图形程序,用于管理已安装的 Flatpak 应用程序和 Flatpak 远程应用。一些最重要的功能是:
- 查看 Flatpak 信息
- 管理用户数据
- 批量操作
- 剩余数据管理
- 管理远程应用
你可以通过单击网站上的安装按钮或手动使用以下命令来安装 “Warehouse”
```
flatpak install flathub io.github.flattool.Warehouse
```
### Jogger
[Jogger][6] 是一款适用于 Gnome Mobile 的应用,用于跟踪跑步和其他锻炼。它是用 GTK4、Libadwaita、Rust 和 Sqlite 构建的。尽管它的目标是 Gnome Mobile但它在 Gnome Shell 下运行得很好,而且我发现它对于保存我的统计数据非常有用。其中一些功能是:
- 使用 Geoclue 位置跟踪锻炼
- 从 Fitotrack 导出导入锻炼
- 手动输入锻炼
- 查看锻炼详情
- 编辑锻炼
- 删除锻炼
- 计算锻炼消耗的卡路里
你可以通过单击网站上的安装按钮或使用以下命令手动安装 “Jogger”
```
flatpak install flathub xyz.slothlife.Jogger
```
### Kooha
[Kooha][7] 是一个简单的屏幕录像机,具有简约的界面。你只需单击录制按钮即可,无需配置一堆设置。
Kooha 的主要特点包括:
- 录制麦克风、桌面音频或同时录制两者
- 支持 WebM、MP4、GIF 和 Matroska 格式
- 选择要录制的监视器、窗口或屏幕的一部分
- 多种来源选择
- 可配置的保存位置、指针可见性、帧速率和延迟
- 它在 Wayland 上运行得很好。

你可以通过单击网站上的安装按钮或使用以下命令手动安装 “Kooha”
```
flatpak install flathub io.github.seadve.Kooha
```
### Warzone 2100
谁不喜欢经典的 Linux 游戏呢?
[Warzone 2100][8] 让你指挥“计划”部队,在人类几乎被核导弹摧毁后重建世界。
![][9]
Warzone 2100 于 1999 年发行,由 Pumpkin Studios 开发,是一款开创性的创新型 3D 即时战略游戏。
2004 年Eidos 与 Pumpkin Studios 合作,以 GNU GPL 条款发布了游戏的源代码。此版本包含除音乐和游戏内视频序列之外的所有内容。当然,这些后来也被发布。
该游戏有一个问题它使用旧的平台包org.kde.Platform 6.4)。这意味着它需要更多的磁盘空间,但乐趣是值得的!
你可以通过单击网站上的安装按钮或使用以下命令手动安装 “Warzone 2100”
```
flatpak install flathub net.wz2100.wz2100
```
Warzone 2100 也可以在 Fedora 仓库中以 rpm 形式使用。
*(题图原始图片由 Daimar Stein 提供)*
---
via: https://fedoramagazine.org/fedora-linux-flatpak-cool-apps-to-try-for-october/
作者:[Eduard Lucena][a]
选题:[lujun9972][b]
译者:[geekpi](https://github.com/geekpi)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux 中国](https://linux.cn/) 荣誉推出
[a]: https://fedoramagazine.org/author/x3mboy/
[b]: https://github.com/lujun9972
[1]: https://fedoramagazine.org/wp-content/uploads/2023/10/flatpak_for_October-816x345.jpg
[2]: https://flathub.org
[3]: https://fedoramagazine.org/getting-started-flatpak/
[4]: https://flatpak.org/setup/Fedora
[5]: https://flathub.org/apps/io.github.flattool.Warehouse
[6]: https://flathub.org/apps/xyz.slothlife.Jogger
[7]: https://flathub.org/apps/io.github.seadve.Kooha
[8]: https://flathub.org/apps/net.wz2100.wz2100
[9]: https://fedoramagazine.org/wp-content/uploads/2023/10/image-1024x738.png
[0]: https://img.linux.net.cn/data/attachment/album/202311/02/210448a8vbipsdww8qk8bp.jpg

View File

@ -0,0 +1,92 @@
[#]: subject: "How to Install the Latest LibreOffice on Ubuntu"
[#]: via: "https://itsfoss.com/install-libreoffice-ubuntu/"
[#]: author: "Abhishek Prakash https://itsfoss.com/author/abhishek/"
[#]: collector: "lujun9972/lctt-scripts-1693450080"
[#]: translator: "geekpi"
[#]: reviewer: "wxy"
[#]: publisher: "wxy"
[#]: url: "https://linux.cn/article-16349-1.html"
如何在 Ubuntu 上安装最新的 LibreOffice
======
![][0]
> 想在 Ubuntu 上使用最新、最好的 LibreOffice这里有一个简单的方法。
LibreOffice 已预装在 Ubuntu 中。
不过,如果你选择了最小化的 Ubuntu 安装,或者卸载它并安装了其他办公套件,你可以使用此命令轻松安装:
```
sudo apt install libreoffice
```
这没问题。但 Ubuntu 仓库提供的 LibreOffice 版本可能不是最新的。
如果你听说有新的 LibreOffice 版本发布,很可能你不会获得该新版本。这是因为 Ubuntu 将其保持在稳定版本上。
这对大多数用户来说都很好。但是,如果你不是“大多数用户”,并且你想在 Ubuntu 中获取最新的 LibreOffice那么你完全可以这样做。
有两种方法可以做到这一点:
* 使用官方 PPA推荐
* 从 LibreOffice 下载 deb 文件
让我们来看看。
### 方法 1通过官方 PPA 安装最新的 LibreOffice推荐
你可以使用官方 “LibreOffice Fresh” PPA 在基于 Ubuntu 的发行版上安装 LibreOffice 的最新稳定版本。
PPA 提供了 LibreOffice 的最新稳定版本,而不是开发版本。因此,这使其成为在 Ubuntu 上获取较新 LibreOffice 版本的理想选择。
你甚至不需要使用此方法卸载以前的版本。它将把现有的 LibreOffice 更新到新版本。
```
sudo add-apt-repository ppa:libreoffice/ppa
sudo apt update
sudo apt install libreoffice
```
由于你要添加 PPA因此你还将获得以这种方式安装的新版本的更新。
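安装或升级完成后,如果想确认当前安装的 LibreOffice 版本,可以运行下面这条命令查看(仅作示意):

```
libreoffice --version
```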
### 方法 2从网站获取二进制文件如果需要
你可以随时前往 [LibreOffice 网站的下载页面][1] 下载最新版本的 deb 文件。你还会看到下载较旧但更稳定的 LTS 版本的选项。
![][2]
我相信你已经 [知道如何从 deb 文件安装应用][3]。右键单击 deb 文件,选择使用软件中心打开它。进入软件中心后,你可以单击安装按钮进行安装。
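如果你更偏好命令行,也可以用 `apt` 直接安装下载好的 deb 文件。注意,下面的文件名只是一个假设的示例,请替换为你实际下载的文件名:

```
sudo apt install ./libreoffice_*.deb
```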
### 结论
第二种方法的缺点是,如果有更新,你必须再次下载 deb 文件,删除以前的 LibreOffice 版本,然后使用新下载的 deb 文件安装新版本。
相比之下PPA 会随着系统更新而自动更新。这就是我推荐 PPA 的原因,特别是当它由 LibreOffice 团队自己维护时。
顺便说一句,这里有一些 [充分利用 LibreOffice 的技巧][4]
> **[提高 LibreOffice 生产力的技巧][4]**
我希望这个快速技巧可以帮助你在基于 Ubuntu 的发行版上获取最新的 LibreOffice。如果你有疑问请告诉我。
--------------------------------------------------------------------------------
via: https://itsfoss.com/install-libreoffice-ubuntu/
作者:[Abhishek Prakash][a]
选题:[lujun9972][b]
译者:[geekpi](https://github.com/geekpi)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://itsfoss.com/author/abhishek/
[b]: https://github.com/lujun9972
[1]: https://www.libreoffice.org/download/download-libreoffice
[2]: https://itsfoss.com/content/images/2023/10/download-libreoffice.png
[3]: https://itsfoss.com/install-deb-files-ubuntu/
[4]: https://linux.cn/article-15530-1.html
[5]: https://itsfoss.com/content/images/size/w256h256/2022/12/android-chrome-192x192.png
[0]: https://img.linux.net.cn/data/attachment/album/202311/04/170549qzsfnygzhtzsn0uo.jpg

View File

@ -0,0 +1,161 @@
[#]: subject: "Cut, Copy and Paste in Vim"
[#]: via: "https://itsfoss.com/vim-cut-copy-paste/"
[#]: author: "Sagar Sharma https://itsfoss.com/author/sagar/"
[#]: collector: "lujun9972/lctt-scripts-1693450080"
[#]: translator: "geekpi"
[#]: reviewer: "wxy"
[#]: publisher: "wxy"
[#]: url: "https://linux.cn/article-16357-1.html"
如何在 Vim 中剪切、复制和粘贴
======
![][0]
> 在本篇 Vim 快速技巧中,你将学习到剪切和复制粘贴的相关知识。
剪切、复制和粘贴文本是文本编辑中最基本的任务之一,我们都知道 Vim 有不同的处理方式。
这意味着,在你掌握它之前,你会害怕它,一旦你掌握了它,它就只是一个兔子洞。
虽然我将详细介绍剪切、复制和粘贴,但这里是本教程的基本摘要,以帮助你开始使用:
**按键** | **描述**
---|---
`yiw` | 复制当前单词。
`yy` | 复制整行。
`diw` | 剪切当前单词。
`dd` | 剪掉整行。
`p` | 粘贴文本。
别担心Vim 为你提供的选项比我上面提到的要多得多。
在本教程中,我将引导你完成以下内容:
* 如何在 Vim 中复制文本
* 如何在 Vim 中剪切文本
* 如何在 Vim 中粘贴文本
* 如何使用可视模式在 Vim 中剪切和复制文本
那么让我们从第一个开始。
### 如何在 Vim 编辑器中复制文本
虽然我们使用术语“复制”,但 Vim 有一个不同的术语,称为 “<ruby>扽出<rt>yank</rt></ruby>”,因此从现在开始,我将使用“扽出”而不是“复制”。
正如我之前提到的,你可以使用多种方法在 Vim 中扽出文本,以下是一些有用的方法:
命令 | 描述
---|---
`nyy``nY` | 扽出(复制)当前行和接下来的 `n-1` 行。例如,`3yy` 复制当前行及其下面的两行。
`yaw` | 扽出(复制)光标所在的当前单词。
`yy``Y` | 扽出(复制)整个当前行。
`y$` | 扽出(复制)从光标到行尾的文本。
`y^``y0` | 扽出(复制)从光标到行首的文本。
要在 Vim 中扽出,请执行以下 3 个简单步骤:
1. 按 `Esc` 键切换到正常模式
2. 移动到要复制的行或单词
3. 按上表中的相关命令,你的文本将被复制
想学习交互式复制行的方式吗? 跳到本教程的最后一部分。
### 如何在 Vim 编辑器中剪切文本
在 Vim 中,并没有单纯的“删除”操作:删除文本实际上就是把它剪切下来,因此在 Vim 中删除和剪切是同一类操作。
要在 Vim 中剪切文本,请按 `d` 命令。但你永远不会在没有任何选项的情况下使用 `d` 命令。你总是会添加一些东西来做更多操作。
因此,你可以使用以下一些实用方法使用 `d` 命令剪切文本:
命令 | 描述
---|---
`dd` | 剪切整个当前行。
`d$` | 将文本从光标剪切到行尾。
`d^``d0` | 将文本从光标剪切到行首。
`ndd``dN` | 剪切当前行和接下来的 `n-1` 行。例如,`3dd` 剪切当前行及其下面的两行。
`daw` | 剪切光标所在的当前单词。
假设我想从文件中剪切前 4 行,然后我需要使用 `4dd`,我是这样做的:
![][1]
### 如何在 Vim 编辑器中粘贴文本
在 Vim 中复制或剪切文本后,只需按 `p` 键即可粘贴它。
你可以多次按 `p` 键多次粘贴文本,也可以使用 `np`,其中 `n` 是要粘贴文本的次数。
例如,在这里,我粘贴了之前复制了三遍的行:
![][2]
就是这么简单!
### 如何通过选择文本来剪切和复制文本
如果你使用过 GUI 文本编辑器,那么你肯定习惯于通过选择文本来复制和剪切文本。
让我们从如何通过在 Vim 中选择文本来复制开始。
#### 通过选择文本复制
要在可视模式下复制文本,请执行以下 3 个简单步骤:
1. 移动到要开始选择的地方
2. 按 `Ctrl + v` 启用可视模式
3. 使用箭头键进行选择
4. 按 `y` 键复制所选文本
例如,在这里,我使用可视模式复制了 4 行:
![][3]
如果你注意到,当我按下 `y` 键,它就会显示有多少行被扽出(复制)。就我而言,有 4 行被复制。
#### 在 Vim 中选择文本来剪切文本
要在 Vim 中以可视模式剪切文本,你所要做的就是遵循 4 个简单步骤:
1. 移动到要剪切的位置
2. 按 `Ctrl + v` 切换到可视模式
3. 使用箭头键选择要剪切的行
4. 按 `d` 键剪切选定的行
假设我想剪掉 4 行,那么我会这样做:
![][4]
挺容易。是吧?
### 更多关于 Vim 的内容
你知道 Vim 有多种模式吗? [了解有关 Vim 中不同模式的更多信息][5]。
想提高你的 Vim 水平吗?请参阅 [成为 Vim 专业用户的提示和技巧][7]。
我希望本指南对你有所帮助。
*题图MJ/3555eed2-ab14-433d-920f-17b80b46ce74*
--------------------------------------------------------------------------------
via: https://itsfoss.com/vim-cut-copy-paste/
作者:[Sagar Sharma][a]
选题:[lujun9972][b]
译者:[geekpi](https://github.com/geekpi)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://itsfoss.com/author/sagar/
[b]: https://github.com/lujun9972
[1]: https://itsfoss.com/content/images/2023/10/Cut-multiple-lines-in-the-Vim-editor.gif
[2]: https://itsfoss.com/content/images/2023/10/paste-lines-in-Vim-editor.gif
[3]: https://itsfoss.com/content/images/2023/10/Copy-lines-in-vim-by-selecting-them.gif
[4]: https://itsfoss.com/content/images/2023/10/Cut-lines-in-Vim-by-selecting-them.gif
[5]: https://linuxhandbook.com/vim-modes/
[7]: https://linuxhandbook.com/pro-vim-tips/
[0]: https://img.linux.net.cn/data/attachment/album/202311/07/172330q49ttt4ee9r86u39.png

View File

@ -0,0 +1,181 @@
[#]: subject: "Install Docker on Arch Linux"
[#]: via: "https://itsfoss.com/install-docker-arch-linux/"
[#]: author: "Abhishek Prakash https://itsfoss.com/author/abhishek/"
[#]: collector: "lujun9972/lctt-scripts-1693450080"
[#]: translator: "geekpi"
[#]: reviewer: "wxy"
[#]: publisher: "wxy"
[#]: url: "https://linux.cn/article-16363-1.html"
在 Arch Linux 上安装 Docker
======
![][0]
> 了解如何在 Arch Linux 上安装 Docker并使用 Docker Compose 和制表符补全为运行容器做好准备。
在 Arch Linux 上安装 Docker 很简单。它可以在 Extra 仓库中找到,你可以简单地 [执行 pacman 魔法][1]
```
sudo pacman -S docker
```
但要在 Arch Linux 上正确运行 Docker还需要执行更多步骤。
### 让 Arch Docker 做好准备
这一切都归结为以下步骤:
* 从 Arch 仓库安装 Docker
* 启动 Docker 守护进程并在每次启动时自动运行
* 将用户添加到 `docker` 组以运行 `docker` 命令而无需 `sudo`
让我们看看详细步骤。
#### 步骤 1安装 Docker 包
打开终端并使用以下命令:
```
sudo pacman -S docker
```
输入密码并在询问时按 `Y`
![][2]
这可能需要一些时间,具体取决于你使用的镜像源。
> 💡 如果你看到找不到包或 404 错误,那么你的同步数据库可能是旧的。使用以下命令更新系统(它将下载大量软件包并需要时间): `sudo pacman -Syu`
#### 步骤 2启动 docker 守护进程
Docker 已安装但未运行。你应该在**第一次运行 Docker 命令**之前启动 Docker 守护进程:
```
sudo systemctl start docker.service
```
我还建议启用 Docker 服务,以便 Docker 守护进程在系统启动时自动启动。
```
sudo systemctl enable docker.service
```
这样,你就可以开始运行 `docker` 命令了。你不再需要手动启动 Docker 服务。
![][3]
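如果想进一步确认守护进程确实在运行,可以查看服务状态(按 `q` 退出输出),以下命令仅作参考:

```
sudo systemctl status docker.service
```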
#### 步骤 3将用户添加到 docker 组
Docker 已安装并且 Docker 服务正在运行。你几乎已准备好运行 `docker` 命令。
但是,默认情况下,你需要将 `sudo``docker` 命令一起使用。这很烦人。
为了避免在每个 `docker` 命令中使用 `sudo`,你可以将自己(或任何其他用户)添加到 `docker` 组,如下所示:
```
sudo usermod -aG docker $USER
```
**你必须注销(或关闭终端)并重新登录才能使上述更改生效。如果你不想这样做,请使用以下命令:**
```
newgrp docker
```
现在已经准备好了。我们来测试一下。
#### 步骤 4验证 docker 安装
Docker 本身提供了一个很小的 Docker 镜像来测试 Docker 安装。运行它并查看是否一切正常:
```
docker run hello-world
```
你应该看到类似这样的输出,表明 Docker 成功运行:
![][4]
恭喜! 你已经在 Arch Linux 上成功安装了 Docker。
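如果想再做一个小练习,可以试试下面这个示例:启动一个临时的 Alpine 容器并进入它的 shell退出后容器会被自动删除镜像名仅为示例

```
docker run -it --rm alpine sh
```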
### 可选:安装 Docker Compose
Docker Compose 已经成为 Docker 不可或缺的一部分。它允许你管理多个容器应用。
较早的经典 Compose 由 `docker-compose` Python 软件包提供。Docker 还将其移植到了 Go并以 `docker compose` 的形式提供,但后者是随 [Docker Desktop][5] 一起提供的。
在这个阶段,我建议使用经典的 `docker-compose` 插件并使用以下命令安装它:
```
sudo pacman -S docker-compose
```
![][6]
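安装完成后,可以用下面这条命令确认 `docker-compose` 是否可用(仅作参考,输出的版本号视安装时间而定):

```
docker-compose --version
```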
### 故障排除技巧
以下是你可能遇到的一些常见问题以及可能的解决方案:
#### 制表符补全不适用于 docker 子命令
如果你想对 `docker` 命令选项使用制表符补全(例如将 `im` 补全到 `images` 等),请安装 `bash-completion` 包:
```
sudo pacman -S bash-completion
```
关闭终端并启动一个新终端。你现在应该能够通过 `docker` 命令使用制表符补全功能。
#### 无法连接到 Docker 守护进程错误
如果你看到以下错误:
```
docker: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?.
See 'docker run --help'.
```
那是因为 Docker 守护进程没有运行。参考步骤 2启动 Docker 服务,确保其正在运行并启用它,以便 Docker 守护进程在每次启动时自动运行。
```
sudo systemctl start docker.service
sudo systemctl enable docker.service
```
#### 尝试连接到 Docker 守护程序套接字时权限被拒绝
如果你看到此错误:
```
docker: permission denied while trying to connect to the Docker daemon socket at unix:///var/run/docker.sock: Post "http://%2Fvar%2Frun%2Fdocker.sock/v1.24/containers/create": dial unix /var/run/docker.sock: connect: permission denied.
See 'docker run --help'.
```
这是因为你需要使用 `sudo` 运行 `docker` 命令,或者将用户添加到 `docker` 组以在不使用 `sudo` 的情况下运行 `docker` 命令。
我希望这篇简短的文章可以帮助你在 Arch Linux 上运行 Docker。
*题图MJ/9951f8bf-d2e5-4335-bd86-ebf89cba654d*
--------------------------------------------------------------------------------
via: https://itsfoss.com/install-docker-arch-linux/
作者:[Abhishek Prakash][a]
选题:[lujun9972][b]
译者:[geekpi](https://github.com/geekpi)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://itsfoss.com/author/abhishek/
[b]: https://github.com/lujun9972
[1]: https://itsfoss.com/pacman-command/
[2]: https://itsfoss.com/content/images/2023/10/installing-docker-arch-linux.png
[3]: https://itsfoss.com/content/images/2023/10/start-docker-daemon-arch-linux.png
[4]: https://itsfoss.com/content/images/2023/10/docker-running-successfully-arch-linux.png
[5]: https://www.docker.com/products/docker-desktop/
[6]: https://itsfoss.com/content/images/2023/10/install-docker-compose.png
[0]: https://img.linux.net.cn/data/attachment/album/202311/09/154128mctmdkdd0jolyv0k.png

View File

@ -0,0 +1,81 @@
[#]: subject: "Skiff Mail Adds a Convenient 'Quick Aliases' Feature"
[#]: via: "https://news.itsfoss.com/skiff-quick-aliases/"
[#]: author: "Ankush Das https://news.itsfoss.com/author/ankush/"
[#]: collector: "lujun9972/lctt-scripts-1693450080"
[#]: translator: "geekpi"
[#]: reviewer: "wxy"
[#]: publisher: "wxy"
[#]: url: "https://linux.cn/article-16346-1.html"
Skiff Mail 添加了方便的“快速别名”功能
======
![][0]
> 一个用户邮件别名的省时功能。
Skiff Mail 是一款开源的端到端加密电子邮件服务,非常注重隐私。在各方面,包括用户体验方面,它都是 Gmail 和 Proton mail 的不错替代品。
虽然与竞争对手相比,它相当新,但它的一些注重隐私的功能可能会给你留下深刻的印象。
此外,还推出了一项新的快速别名功能。我试用了一下,感觉非常方便。
### 快速别名:一次性无忧设置
一般来说,你可以使用一些 [电子邮件保护工具][1](例如 SimpleLogin或从你的电子邮件提供商无论是谁创建电子邮件别名。
![][2]
你可以选择记住电子邮件别名以供使用,或者在每次注册服务、新闻通讯或向你不认识的人提供联系信息时生成唯一的别名。
换句话说,大多数情况下,需要你进行多次交互才能使用多个电子邮件别名。
在这里,[Skiff Mail][3] 允许你为自己声明一个完整的唯一子域,例如 **gojo.maskmy.id** (正如我在测试用例中所做的那样):
![][4]
接下来,你所要做的就是把它当作你的网站地址那样使用:在它前面加上任意内容,即可得到一个电子邮件地址,例如 **[xyz@gojo.maskmy.id][5]** 或 **[demo@gojo.maskmy.id][5]**。
如上面的截图所示,你也可以选择生成一个随机名称来声明。
你可以从“<ruby>设置<rt>Settings</rt></ruby>”菜单访问“<ruby>快速别名<rt>Quick Aliases</rt></ruby>”功能:
![][7]
因此,你不再需要生成电子邮件别名,但仍然可以通过这种方式拥有无限的别名。使其成为一次性设置解决方案,可供在线和离线使用。
我认为这些类型的别名应该有几个好处:
* 它使你无需访问该工具即可轻松创建新别名
* 使电子邮件别名看起来比垃圾邮件更真实
根据你的订阅,每人最多可以申请 3 个域名Essential 为 1 个、Pro 为 2 个、Business 为 3 个),并用它们创建无限个电子邮件别名。
如果你使用免费套餐,你还可以申请 1 个域名,但最多只能使用 10 个别名。
> **[Skiff Quick 别名][9]**
💬 你对此功能有何看法? 请在下面的评论中告诉我。
--------------------------------------------------------------------------------
via: https://news.itsfoss.com/skiff-quick-aliases/
作者:[Ankush Das][a]
选题:[lujun9972][b]
译者:[geekpi](https://github.com/geekpi)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://news.itsfoss.com/author/ankush/
[b]: https://github.com/lujun9972
[1]: https://itsfoss.com/protect-email-address/
[2]: https://news.itsfoss.com/content/images/2023/10/email-alias-skiff.jpg
[3]: https://skiff.com/mail
[4]: https://news.itsfoss.com/content/images/2023/10/email-alias-skiff-set.jpg
[5]: https://news.itsfoss.com/cdn-cgi/l/email-protection
[6]: https://news.itsfoss.com/content/images/2023/04/Follow-us-on-Google-News.png
[7]: https://news.itsfoss.com/content/images/2023/10/skiff-email-alias-option.png
[9]: https://skiff.com/quick-alias
[0]: https://img.linux.net.cn/data/attachment/album/202311/03/222957cr3rujs9rvxjm9x7.jpg

View File

@ -0,0 +1,99 @@
[#]: subject: "Open Source Definition for AI Models Need a Change"
[#]: via: "https://news.itsfoss.com/open-source-definition-ai/"
[#]: author: "Sourav Rudra https://news.itsfoss.com/author/sourav/"
[#]: collector: "lujun9972/lctt-scripts-1693450080"
[#]: translator: "ChatGPT"
[#]: reviewer: "wxy"
[#]: publisher: "wxy"
[#]: url: "https://linux.cn/article-16344-1.html"
AI 模型的开源定义需要改变
======
![][0]
> 你认为开源许可证应当进行演变吗?
2023 年,我们以**人工智能AI崭露头角**开始了新的一年,同时也见证了众多公司全力以赴投身于 AI。
比如说 **Mozilla**,它在 2023 年初制定了 [开源 AI 计划][1],以开发各种 AI 驱动的解决方案。而 **HuggingChat** 也作为 ChatGPT 的第一个 [开源替代品][2] 亮相。
即便是 Meta他们也不例外。他们自家的 <ruby>大型语言模型<rt>Large Language Model</rt></ruby>LLM[Llama 2][3] 项目在这一年都颇受关注,几个月前他们甚至推出了一款新的 [ChatGPT 竞争对手][4]。
然而,也有很多人开始 [提出质疑][5],主张 **Meta 的 Llama 2 模型并不像人们期望的那样开放**,查看它的开源许可证似乎更是印证了这个观点。
该许可证 **不允许拥有超过 7 亿日活跃用户的服务使用 Llama 2**,同样的,**它也不能被用于训练其他的语言模型**。
这也就意味着 Meta 对于 Llama 2 的许可证 **未能满足** <ruby>开源倡议组织<rt>Open Source Initiative</rt></ruby>OSI<ruby>[开源定义][6]<rt>Open Source Definition</rt></ruby>OSD所列出的 **全部要求**
人们可以争辩,像 [EleutherAI][7] 和 [Falcon 40B][8] 这样的机构就做出了很好的示范,展示了如何适当地处理 AI 的开源许可。
然而Meta 对此的看法却截然不同。
### 开源许可需要进化
在与 [The Verge][10] 的交谈中Meta 人工智能研究副总裁 [Joëlle Pineau][11] 为他们的立场进行了辩解。
她说,我们 **需要在信息共享的益处和可能对 Meta 商业造成的潜在成本之间寻找平衡**
这种对开源的态度让他们的研究人员能够更加专注地处理 AI 项目。她还补充说:
> 开放的方式从内部改变了我们的科研方法,它促使我们不发布任何不安全的东西,并在一开始就负起责任。
Joëlle 希望他们的生成型 AI 模型能够和他们过去的 [PyTorch][12] 项目一样受到热捧。
但是,**问题在于现有的许可证机制**。她又补充说,这些许可证并不是设计来处理那些需要利用大量多源数据的软件。
这反过来**为开发者和用户提供了有限责任**,以及,**对版权侵犯的有限赔偿**(解释为:保护)。
此外,她还指出:
> AI 模型与软件不同,涉及的风险更大,因此我认为我们应该对当前用户许可证进行改变,以更好地适应 AI 模型。
>
> 但我并不是一名律师,所以我在此问题上听从他们的意见。
我赞同她的观点,我们需要更新现有的许可方案,使之更好地适应 AI 模型,以及其他相关事务。
显而易见,**OSI 正在努力进行此事**。OSI 的执行董事 [Stefano Maffulli][13] 向 The Verge 透露,他们了解到 **当前的 OSI 批准的许可证无法满足人工智能模型的需求**
他们正在商讨如何与 AI 开发者合作,以提供一个 “**透明、无许可但安全**” 的模型访问。
他还补充说:
> 我们肯定需要重新思考许可证的方式,以解决 AI 模型中版权和授权的真正限制,同时仍遵循开源社区的一些原则。
无论未来如何,显然,**开源标准必须推动其演化,以适应新的以及即将出现的技术** ,而此类问题不仅仅局限于 AI。
对于未来几年开源许可的变革,我充满期待。
💬 对于你来说呢?你认为对于陈旧的开源标准,我们需要进行什么样的改变?
*题图MJ/e8bae5f6-606b-47db-aaea-c992c0bd143e*
--------------------------------------------------------------------------------
via: https://news.itsfoss.com/open-source-definition-ai/
作者:[Sourav Rudra][a]
选题:[lujun9972][b]
译者:[ChatGPT](https://linux.cn/lctt/ChatGPT)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://news.itsfoss.com/author/sourav/
[b]: https://github.com/lujun9972
[1]: https://news.itsfoss.com/mozilla-open-source-ai/
[2]: https://news.itsfoss.com/huggingchat-chatgpt/
[3]: https://ai.meta.com/llama/
[4]: https://news.itsfoss.com/meta-open-source-chatgpt/
[5]: https://www.wired.com/story/the-myth-of-open-source-ai/
[6]: https://opensource.org/osd/
[7]: https://www.eleuther.ai/
[8]: https://www.tii.ae/news/uaes-technology-innovation-institute-launches-open-source-falcon-40b-large-language-model
[9]: https://news.itsfoss.com/content/images/2023/04/Follow-us-on-Google-News.png
[10]: https://www.theverge.com/2023/10/30/23935587/meta-generative-ai-models-open-source
[11]: https://en.wikipedia.org/wiki/Jo%C3%ABlle_Pineau
[12]: https://pytorch.org/
[13]: https://twitter.com/smaffulli
[0]: https://img.linux.net.cn/data/attachment/album/202311/02/215953yyz45l5l3v4fzqyv.jpg

View File

@ -0,0 +1,113 @@
[#]: subject: "Garuda Linux “Spizaetus” Release Lets You Try KDE Plasma 6 Now!"
[#]: via: "https://news.itsfoss.com/garuda-linux-spizaetus-release/"
[#]: author: "Sourav Rudra https://news.itsfoss.com/author/sourav/"
[#]: collector: "lujun9972/lctt-scripts-1693450080"
[#]: translator: "ChatGPT"
[#]: reviewer: "wxy"
[#]: publisher: "wxy"
[#]: url: "https://linux.cn/article-16347-1.html"
Garuda Linux “Spizaetus” 发布,可以体验 KDE Plasma 6 了!
======
![][0]
> Garuda Linux 的最新升级带来了一些引人入胜的变化,以及一个全新的 ISO 版本。
Garuda Linux作为一款 [用户友好且基于 Arch Linux 的发行版][1],由于其可高度定制和可扩展性,近期已经吸引了大批用户。
Garuda Linux 提供了众多选项以满足不同的使用场景,无论是编程还是游戏。
目前Garuda Linux 的最新发布版,**Garuda Linux “Spizaetus”** 现已可用。
下面,让我来引导你了解一下这个版本。
### 🆕 Garuda Linux “Spizaetus”有哪些新变化
![][2]
这个版本的代号“[Spizaetus][3]” 是来源于一种通常在美洲热带地区发现的鹰鹞。此次发布的**主要亮点**包括:
* 提供 Hyprland ISO
* Ugrep 取代了 Grep
* 提供了实验性的 KDE Plasma 6 仓库
#### 提供 Hyprland ISO
![][5]
在这个 Garuda Linux 的版本中,推出了带有 [Hyprland][6] 动态平铺 Wayland 组合器的新 ISO这让**流畅的动画**和**轻松的窗口平铺**成为可能。
在此,开发者之一的 **dr460nf1r3** 表示:
> 关注精美的界面和模糊的窗口Hyprland 无疑是 Garuda 的完美搭配。
> 🚧 然而,不幸的是,他们**不得不停止一些其它变体的更新**,因为这些变体并未得到妥善维护。受影响的变体包括:**MATE**、**LXQt-Kwin**、**KDE-git** 和 **Wayfire**
#### Ugrep 取代了 Grep
[grep][7] 命令行工具已被性能更强的 [ugrep][8] 文件模式搜索器所取代,后者声称自己是“**超快速且用户友好的 Grep 替代品**”。
这是一个令人感兴趣的改变,我们期待看到用户的反馈。
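如果你想自己感受一下,可以试试下面这个简单的示例:在当前目录中递归搜索包含 “error” 的行,并显示行号(选项与 grep 基本兼容,仅作示意):

```
ugrep -r -n "error" .
```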
#### 实验性的 KDE Plasma 6 仓库(请谨慎!)
开发者们还引入了一个名为 [chaotic-aur-kde][10] 的实验性仓库,以提供**早期版本的 Plasma 6**。
请注意,这是为那些想提前体验 Plasma 6 设计的用户而设,并**不建议将其用于生产环境**。
一位开发者补充道:
> 我们一直在努力通过特定的 chaotic-aur-kde 仓库,提供 Plasma 6 的早期构建。
>
> 这可以让我们初步探索未来的 Plasma 6 将会是什么样子 - 它的首个版本计划在 2024 年 2 月发布。无需多说,这些都是来自主分支的实验性构建版,因此只适合喜欢接受挑战的人去体验。
#### 🛠️ 其它的变化和改进
除了上述的亮点之外,还有些其他值得一提的细化调整:
* **garuda-update 工具已更新**,用于处理近期由于包名称变更引发的冲突。
* [Plymouth][11] **已被移除**,现在启动时会显示终端的输出内容。
* 对他们的**基础设施进行了各种更新**,以服务用户。
* 为 Garuda Linux 安装了一个专用的 [Lemmy 实例][12]。
你可以查阅 [官方公告][13] 来获取更多详细信息。
### 📥 下载 Garuda Linux “Spizaetus”
Garuda Linux 提供了 **9 个不同的变体**,包括有 **KDE Plasma**、**GNOME**、**Xfce**、**Cinnamon**、**Sway** 等等选择。
你只需前往 [官方网站][15] ,就可以找到自己所需的镜像下载。
> **[Garuda Linux][15]**
💬 你会尝试这个新发布的版本吗?欢迎在评论中告知我们!
--------------------------------------------------------------------------------
via: https://news.itsfoss.com/garuda-linux-spizaetus-release/
作者:[Sourav Rudra][a]
选题:[lujun9972][b]
译者:[ChatGPT](https://linux.cn/lctt/ChatGPT)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://news.itsfoss.com/author/sourav/
[b]: https://github.com/lujun9972
[1]: https://itsfoss.com/arch-based-linux-distros/
[2]: https://news.itsfoss.com/content/images/2023/11/Garuda_Linux_Spizaetus.png
[3]: https://en.wikipedia.org/wiki/Spizaetus
[4]: https://itsfoss.com/content/images/size/w256h256/2022/12/android-chrome-192x192.png
[5]: https://news.itsfoss.com/content/images/2023/11/Garuda_Linux_Spizaetus_Hyprland.png
[6]: https://hyprland.org/
[7]: https://en.wikipedia.org/wiki/Grep
[8]: https://ugrep.com/
[9]: https://news.itsfoss.com/content/images/2023/04/Follow-us-on-Google-News.png
[10]: https://forum.garudalinux.org/t/kde-6-repository-testing/31442
[11]: https://wiki.archlinux.org/title/plymouth
[12]: https://lemmy.garudalinux.org/
[13]: https://forum.garudalinux.org/t/garuda-linux-spizaetus-231029/31843
[14]: https://news.itsfoss.com/content/images/size/w256h256/2022/08/android-chrome-192x192.png
[15]: https://garudalinux.org/downloads
[0]: https://img.linux.net.cn/data/attachment/album/202311/03/225157jllf15lmeqehhhip.jpg

View File

@ -0,0 +1,232 @@
[#]: subject: "Using Asciiquarium for Aquarium Like Animation Effects in Linux Terminal"
[#]: via: "https://itsfoss.com/asciiquarium/"
[#]: author: "Sreenath https://itsfoss.com/author/sreenath/"
[#]: collector: "lujun9972/lctt-scripts-1693450080"
[#]: translator: "ChatGPT"
[#]: reviewer: "wxy"
[#]: publisher: "wxy"
[#]: url: "https://linux.cn/article-16354-1.html"
在 Linux 终端利用 Asciiquarium 打造海底世界
======
![][0]
> 这是一个小小的 CLI 工具,可在 Linux 终端中添加水族箱。
[Linux 的众多命令工具][1] 里有一部分偏向于休闲娱乐而非工作。Asciiquarium 就是一个很好的例子。
Asciiquarium 为 Linux 终端提供了以 ASCII 格式构建的简单的水族馆动画效果。
![][2]
看起来有趣吗?我们一起进一步了解。
### 如何在 Linux 中安装 Asciiquarium
如果你是 Arch Linux 或 Fedora 用户,你可以直接从官方仓库中安装。
Fedora 的用户请运行:
```
sudo dnf install asciiquarium
```
而 Arch Linux 用户请运行:
```
sudo pacman -S asciiquarium
```
对于 UbuntuAsciiquarium 没有包含在默认仓库里。因此,你需要选择使用预编译的二进制文件,或者一些外部的 PPA。
#### 使用 PPA 安装 Asciiquarium
首先,添加 Asciiquarium 的 PPA
```
sudo add-apt-repository ppa:ytvwld/asciiquarium
sudo apt update
```
然后,安装相关的软件包和依赖:
```
sudo apt install asciiquarium
```
##### 删除 PPA
在你删除 Asciiquarium 的 PPA 之前,首先要移除相关软件包。
```
sudo apt purge asciiquarium
sudo apt autoremove
```
然后,从系统中移除 PPA
```
sudo add-apt-repository --remove ppa:ytvwld/asciiquarium
sudo apt update
```
#### 使用二进制文件安装 Asciiquarium
> 🚧 你需要为你的系统单独安装一些 Perl 模块。同时,它将在你的系统中安装几个与 Perl 相关的包,所以请注意。
![安装 Perl 依赖包][3]
要运行二进制文件,你需要从 CPAN 中安装 Animation 和 Curses 模块。
在 Ubuntu 中安装 CPAN
```
sudo apt install cpanminus libcurses-perl
```
接着,运行:
```
cpan Term::Animation
```
![Animation 模块安装][4]
该操作会要求你做一些配置,只需选取默认值即可。全部设置好后,来下载 Asciiquarium 的发布版。
> **[下载 Asciiquarium][5]**
解压文件,你会得到一个名为 Asciiquarium 的文件,接下来,让它具有执行权限。
![赋予 Asciiquarium 执行权限][6]
如果你需要通过命令行来完成,只需打开终端,并用 [chmod 命令][7]赋予执行权限。
```
chmod +x asciiquarium
```
此时,你可以直接在当前目录下运行这个文件以获取动画效果:
```
./asciiquarium
```
或者,你也可以把这个文件放在一个 [包含在你的 PATH 中][8]的位置上。
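例如,假设 `~/.local/bin` 已经包含在你的 PATH 中,可以这样放置(路径仅作示意):

```
mkdir -p ~/.local/bin
mv asciiquarium ~/.local/bin/
```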
### 如何使用 Asciiquarium
Asciiquarium 使用起来非常简单,它不设任何命令行选项。只需运行 `asciiquarium`,你就能在终端中看到水族馆的动画效果。
![Asciiquarium 动画效果][2]
程序还提供了几个热键支持。
* `r`:重绘动画
* `p`:暂停/播放动画
* `q`:退出程序
> 📋 此外,也可以使用箭头键提升动画的速度。
#### 用 lolcat 加强 Asciiquarium 的体验
如果你想让 Asciiquarium 的颜色更丰富,可以综合使用 `lolcat`。首先安装 `lolcat`
```
sudo apt install lolcat
```
然后,运行:
```
asciiquarium | lolcat
```
![Asciiquarium Lolcat 效果][9]
如果你还需要更多的动画效果,可以适当调节 `lolcat` 的参数,例如:
```
asciiquarium | lolcat -p 200
```
![Asciiquarium 和 lolcat 的效果调整][10]
这样操作会产生各种不同的颜色效果。
你还可以使用 `lolcat``-i` 选项,来反转颜色:
```
asciiquarium | lolcat -i -p 200
```
![颜色反转效果][11]
### 赠品XFishTank让你的桌面变成海底世界
还有一个类似的有趣命令叫做 `xfishtank`。它在你的根窗口,即桌面,创建一片海洋世界。你可以从 Ubuntu 的官方仓库直接安装 `xfishtank`
```
sudo apt install xfishtank
```
安装完成后,直接运行:
```
xfishtank
```
XFishTank 提供了很多选项供你调节,例如鱼儿的数量、气泡等等。你可以参考 [该命令的 man 页面][12] 学习更多相关内容。
```
xfishtank -b 100 -f 15
```
![Xfishtank 效果展示][13]
### 结语
就像你所看到的Linux 终端里的小鱼或许不能提供实质性的帮助,但它确实能带给我们愉快的心情。
如果你不是那么喜欢鱼,那么试试看牛吧。
> **[哞~ 我的 Linux 终端里有头牛][14]**
希望你在这些有趣的小工具的陪伴下,能够更加享受 Linux 的世界。:)
*题图MJ/83766cba-02e1-4d20-8797-a38e5c17a0c0*
--------------------------------------------------------------------------------
via: https://itsfoss.com/asciiquarium/
作者:[Sreenath][a]
选题:[lujun9972][b]
译者:[ChatGPT](https://linux.cn/lctt/ChatGPT)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://itsfoss.com/author/sreenath/
[b]: https://github.com/lujun9972
[1]: https://itsfoss.com/funny-linux-commands/
[2]: https://itsfoss.com/content/images/2023/10/asciiquarium.png
[3]: https://itsfoss.com/content/images/2023/10/Installing-perl-dependencies.png
[4]: https://itsfoss.com/content/images/2023/10/animation-module-setup.png
[5]: https://robobunny.com/projects/asciiquarium/html/
[6]: https://itsfoss.com/content/images/2023/10/execution-permission-for-asciiquarium.png
[7]: https://linuxhandbook.com/chmod-command/
[8]: https://itsfoss.com/add-directory-to-path-linux/
[9]: https://itsfoss.com/content/images/2023/10/ascciiquarium-lolcat.png
[10]: https://itsfoss.com/content/images/2023/10/lolcat-200.gif
[11]: https://itsfoss.com/content/images/2023/10/inverted.png
[12]: https://itsfoss.com/linux-man-page-guide/
[13]: https://itsfoss.com/content/images/2023/10/xfishtank.png
[14]: https://linux.cn/article-15952-1.html
[0]: https://img.linux.net.cn/data/attachment/album/202311/06/104101r2sfkrf27ozfqffq.png

View File

@ -0,0 +1,89 @@
[#]: subject: "An Open Source Web-Based AI-Powered Vector Graphics Editor: Wait, What? 🤯"
[#]: via: "https://news.itsfoss.com/graphite/"
[#]: author: "Sourav Rudra https://news.itsfoss.com/author/sourav/"
[#]: collector: "lujun9972/lctt-scripts-1693450080"
[#]: translator: "ChatGPT"
[#]: reviewer: "wxy"
[#]: publisher: "wxy"
[#]: url: "https://linux.cn/article-16359-1.html"
Graphite由 AI 助力的基于网络的开源矢量图形编辑器
======
![][0]
> 让我们了解更多即将亮相的矢量图形编辑器 Graphite。
此次,我们要介绍的是一个完全免费而开源的 **平面图形编辑器**,名为 “Graphite”它专注于创建一个完善的 **非破坏性所见即所得编辑体验**
Linux 上许多 [出色的矢量图形编辑器][1] 都是专门的桌面应用程序,但 Graphite 则选择了不同的路径,成为一个 **仅基于浏览器的应用**。不过,根据路线图,它计划为 Linux、Windows 和 macOS 提供桌面应用。
考虑到它能在网络上运行,它的目标是在提供所有必要功能的同时保持轻量级。
> 📋 目前,此应用处于 alpha 开发阶段,许多计划中的功能还处在概念验证状态。
### Graphite综述 ⭐
![][3]
Graphite 是一个轻量级的、在网络浏览器上运行的 **基于 Rust 的矢量图形编辑器**。由 **Graphene 节点图组合引擎** 驱动,提供了一个易于访问、基于层的优秀编辑器。
其中 **最值得注意的特性** 包括:
* 精美、直观的界面
* 节点图图像效果
* AI 辅助艺术创作
* 实时协作
开发者分享的一段视频展示了几乎所有前述的特性。我被视频中如此直接简洁的编辑体验深深打动。
事实上,我亲自尝试使用它的 [开发版][4] 创作了下面的杰作,**编辑体验类似于你在大多数图形编辑器中所发现的**。
不过,我必须手动寻找各种选项,因为 Graphite **并没有官方文档**
我之前与首席开发者交谈过,他告诉我他们正在努力提供文档,并且已经在期待它的发布了。
同时,值得注意的是,一个轻量级应用竟然有这么多值得写进文档的选项,这听起来就很赞!
![][5]
当然,目前的状态下我们还无法真正检验该应用的核心理念。然而,对我来说,如果 Graphite 矢量图形编辑器能实现其展示和承诺的话,那么它的影响将是 **颠覆性的**
我也好奇他们会如何在他们的自由开源应用中添加 AI 辅助艺术创作功能。谁不想试用这个呢?对吧!🤩
与此同时,我建议你关注 [Graphite 的路线图][6] 和 [GitHub 仓库][7],以了解重要的开发里程碑或可以贡献你的智慧。
### 📥 如何获取 Graphite
因为这是一个网络应用,它基本上可以 **在任何支持的网络浏览器上运行**,尽管我不确定智能手机上的编辑体验会怎样。然而,对于台式机、笔记本电脑,甚至是平板电脑,这都可能是个不错的选择。
你可以访问这个 [在线编辑器][8] 来获取最新的 alpha 版本。如果你想获取最新的开发版本,可以试试 [开发版][9]。
> [Graphite][8]
💬 你对 Graphite 矢量图形编辑器有什么看法?你喜欢它的概念吗?你认为他们能实现吗?欢迎分享你的想法。
--------------------------------------------------------------------------------
via: https://news.itsfoss.com/graphite/
作者:[Sourav Rudra][a]
选题:[lujun9972][b]
译者:[ChatGPT](https://linux.cn/lctt/ChatGPT)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://news.itsfoss.com/author/sourav/
[b]: https://github.com/lujun9972
[1]: https://itsfoss.com/vector-graphics-editors-linux/#bonus-svg-edit-web-based-alternative-
[2]: https://itsfoss.com/content/images/size/w256h256/2022/12/android-chrome-192x192.png
[3]: https://news.itsfoss.com/content/images/2023/11/Graphite_1.png
[4]: https://dev.graphite.rs/
[5]: https://news.itsfoss.com/content/images/2023/08/Graphite_2.jpg
[6]: https://graphite.rs/features/
[7]: https://github.com/GraphiteEditor/Graphite
[8]: https://editor.graphite.rs/
[9]: https://dev.graphite.rs/
[0]: https://img.linux.net.cn/data/attachment/album/202311/07/233225ho5gwy3gypdkn5iz.jpg

View File

@ -0,0 +1,152 @@
[#]: subject: "Fedora 39 Release Debuts With a New Immutable Variant"
[#]: via: "https://news.itsfoss.com/fedora-39-release/"
[#]: author: "Sourav Rudra https://news.itsfoss.com/author/sourav/"
[#]: collector: "lujun9972/lctt-scripts-1693450080"
[#]: translator: "ChatGPT"
[#]: reviewer: "wxy"
[#]: publisher: "wxy"
[#]: url: "https://linux.cn/article-16360-1.html"
Fedora 39 版本发布,新亮相一款不可变版本
======
![][0]
> Fedora 39 的发布带来了许多令人兴奋的变化和完善。
Fedora 是众多 Ubuntu 替代品中的热门选择,或者还可以说,用其来替换任何其他基于 Debian 的发行版都是一个理想的选择。
每次升级后Fedora 都会变得更好。现在让我们看一下最新的 Fedora 39 有哪些重要更新。
接下来,我将向你展示这个版本都包含了哪些新功能。
### 🆕 Fedora 39 版本更新:都有哪些新内容?
![][1]
得益于稳健强大的 [Linux 内核 6.5][2]Fedora 39 的发布包含了众多值得关注的新功能。其中一些关键的亮点包括:
* 安装程序的升级
* GNOME 45
* 应用/包的更新
* Fedora Onyx
#### 安装程序的升级
![][3]
尽管安装程序并未更换,我们在全新安装 Fedora 时,会看到 **一个新的欢迎屏幕**。在更新后的 Fedora 38 ISO 中,你可能已经注意到了这一点。
此外,**Fedora 的安装程序现在支持更大的 EFI 系统分区**,这个分区的 **最小尺寸达到了 500 MB**。这样做是为了便于在现代硬件上进行固件更新,也为未来引导程序特性做好了准备。
你可能会在想,[Anaconda Web UI][4] 出了什么情况?
其实,这个 UI 设计已经 [计划][5] 在 Fedora 40 中推出了,我们都非常期待未来它的表现!
#### GNOME 45
![][7]
Fedora 39 引入了最近发布的 [GNOME 45][8]。对于此次更新,我是期待已久。
早先的 “<ruby>活动<rt>Activities</rt></ruby>” 按钮已被替代,现在你将看到一个 **药丸形状的动态指示器**,它可以清晰地显示出当前所在的工作空间,以及其他可供切换的工作空间。
你可以通过滚动它或者点击圆点来切换工作空间。
<ruby>设置<rt>Settings</rt></ruby>” 应用程序里也陆续出现了一些新的变化,例如像 **经过翻新的 “<ruby>隐私<rt>Privacy</rt></ruby>” 标签页**,以及在 “<ruby>关于<rt>About</rt></ruby>” 菜单下新增的 “<ruby>系统细节<rt>System Details</rt></ruby>” 子菜单。
此外,有一些新的核心应用程序,比如新的图像查看器以及其他应用的更新,我强烈建议你查看我们所有关于 GNOME 45 的内容:
> **[GNOME 45 发布,弃用“活动”按钮][9]**
#### 应用/包的更新情况
![][10]
Fedora 39 同样更新了一套应用程序和软件包。主要的更新包括:
* [LibreOffice 7.6][11]
* [LLVM 17][12]
* [Golang 1.21][13]
* IBus 1.5.29
* Perl 5.38
* RPM 4.19
* Python 3.12
#### Fedora Onyx 的介绍
![][14]
今年早些时候Fedora 的不可变版本阵容迎来了一个新成员,叫做 “[Fedora Onyx][15]”。
那时,**我们并不知道它的具体发布日期**,但现在,它终于在 **Fedora 39 的版本更新中有所呈现**
这使得 Fedora 的不可变版本阵容 **增长到了四个**,其它分别是 [Fedora Silverblue][16]、[Fedora Kinoite][17],以及 [Fedora Sericea][18]。
**推荐阅读** 📖
> **[11 个不可变 Linux 发行版,适合那些想要拥抱未来的人们][19]**
#### 🛠️ 其他变化和改进
其他的变化包括一些注目的内容:
* Fedora 39 提供了一个绿色的 Bash 提示符。
* 移除了 Fedora 工作站的自定义 Qt 主题设计。
* 对于 Indic印度语言实现了谷歌 [Noto][20] 字体。
* Flatpak 不再使用模块进行构建,而是采用了一个独立的构建目标。
你还可以通过我们针对 [Fedora 39 特性][21] 的报道和查阅 [官方更改日志][22] 来更深入地了解这次的版本发布。
### 📥 如何下载 Fedora 39
和往常一样Fedora 的新版本更新带来了大量的改进和升级。你可以转到 [官方网站][23] 获取你所需要的 ISO 文件。
你也可以点击下面的按钮直接下载。
> **[点击获取 Fedora 39][23]**
**对于现有用户** ,我们有一个很实用的升级指南可以帮助你开始升级操作:
> **[如何从 Fedora 38 升级到 Fedora 39][24]**
*题图MJ/f6e0a56a-5e8b-4c0f-89c2-596375bba00f*
--------------------------------------------------------------------------------
via: https://news.itsfoss.com/fedora-39-release/
作者:[Sourav Rudra][a]
选题:[lujun9972][b]
译者:[ChatGPT](https://linux.cn/lctt/ChatGPT)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://news.itsfoss.com/author/sourav/
[b]: https://github.com/lujun9972
[1]: https://news.itsfoss.com/content/images/2023/10/Fedora_39_1.png
[2]: https://news.itsfoss.com/linux-kernel-6-5-release/
[3]: https://news.itsfoss.com/content/images/2023/10/Fedora_39_2.png
[4]: https://news.itsfoss.com/fedora-new-web-ui-install-dev/
[5]: https://fedoraproject.org/wiki/Changes/AnacondaWebUIforFedoraWorkstation
[6]: https://news.itsfoss.com/content/images/2023/04/Follow-us-on-Google-News.png
[7]: https://news.itsfoss.com/content/images/2023/10/Fedora_39_3.png
[8]: https://news.itsfoss.com/gnome-45-release/
[9]: https://linux.cn/article-16215-1.html
[10]: https://news.itsfoss.com/content/images/2023/10/Fedora_39_4.png
[11]: https://news.itsfoss.com/libreoffice-7-6/
[12]: https://releases.llvm.org/17.0.1/docs/ReleaseNotes.html
[13]: https://go.dev/blog/go1.21
[14]: https://news.itsfoss.com/content/images/2023/10/Fedora_39_5.png
[15]: https://news.itsfoss.com/fedora-onyx-official/
[16]: https://silverblue.fedoraproject.org/
[17]: https://kinoite.fedoraproject.org/
[18]: https://fedoraproject.org/sericea/
[19]: https://linux.cn/article-15841-1.html
[20]: https://fonts.google.com/noto
[21]: https://linux.cn/article-16207-1.html
[22]: https://fedoraproject.org/wiki/Releases/39/ChangeSet
[23]: https://fedoraproject.org/workstation/download/
[24]: https://itsfoss.com/upgrade-fedora-version/
[0]: https://img.linux.net.cn/data/attachment/album/202311/08/160126z6pyyg7t7iiagzzz.png

View File

@ -0,0 +1,113 @@
[#]: subject: "Whats new in Fedora Workstation 39"
[#]: via: "https://fedoramagazine.org/whats-new-fedora-workstation-39/"
[#]: author: "Merlin Cooper https://fedoramagazine.org/author/mxanthropocene/"
[#]: collector: "lujun9972/lctt-scripts-1693450080"
[#]: translator: "ChatGPT"
[#]: reviewer: "wxy"
[#]: publisher: "wxy"
[#]: url: "https://linux.cn/article-16362-1.html"
Fedora Workstation 39 的新特性
======
![][0]
作为领先的开源桌面操作系统Fedora Workstation 由全球社区共同携手打造。本文将向你展示最新版本 Fedora Workstation 39 中用户可见的主要变化。如今,你可以在 [Fedora Workstation 官方网页][3] 下载体验,也可以在“软件”应用中升级,或在你喜爱的终端模拟器里使用 [dnf system-upgrade][4] 进行升级。
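如果选择在终端中升级,常见的步骤大致如下(仅为示意,具体请以 [dnf system-upgrade][4] 的官方文档为准):

```
sudo dnf upgrade --refresh
sudo dnf install dnf-plugin-system-upgrade
sudo dnf system-upgrade download --releasever=39
sudo dnf system-upgrade reboot
```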
### GNOME 45
Fedora Workstation 39 搭载了 GNOME 的最新版本——GNOME 45。这个版本为几个核心应用带来了时尚的新侧边栏、全新的图像查看器应用并在受支持的硬件上新增了键盘背光设置。同时更具信息量的“活动”按钮以及贯穿整个桌面体验的多项性能改进也一并到来。你可以在 [GNOME 45 发布说明][5] 查看更多详细信息。
#### 内在改进
GNOME 45 在用户体验的方方面面都有所细化,以下几点只是其中的一部分:
* 使用动态工作空间指示器取代了静态的活动按钮。新的指示器不仅显示工作空间数量,而且还能快速展示你正在关注的工作空间。
![新的动态工作空间指示器][6]
* 新增了一个摄像头活动指示器,当你的摄像头通过 Pipewire 被访问时它就会亮起。它与麦克风、屏幕投射和屏幕录制指示器协同工作。
* 快速设置菜单新增了一个键盘背光设置,这项设置仅支持特定的硬件设备。
![新的键盘背光设置][7]
* 重新设计了默认的光标,并修复了原来设置中很多长期存在的问题。
* Fedora Workstation 39 已不再支持 Adwaita-qt 与 QGnomePlatform Qt 主题,取而代之的是 Qt 应用的上游默认主题。
#### 核心应用
在 GNOME 45 中,许多应用现在使用了 libadwaita 1.4 引入的新用户界面样式。它提供了美观的双色设计,并将侧边栏延伸至窗口全高。这不仅使应用看起来更美观,而且在小窗口尺寸下更易于使用。可在 [此处][8] 查看详细信息。此外新的标题栏headerbar样式使顶栏与窗口内容的视觉分隔更加明显。
![文件、设置和日历显示新的侧边栏样式][9]
Fedora Workstation 39 引入了 GNOME 的新图像查看器应用,内部代号为 Loupe。该应用自底而上使用 Rust、GTK 4和 libadwaita 进行构建以实现高性能和高度适应性。
![新的图像查看器应用(截图素材来自公有领域)][10]
在核心应用中我们还有很多细微的改进。例如:
* “设置”应用新增了“系统详情”区块,新的键盘布局查看器,简洁的描述字段,以及优化的键盘导航功能。
![新的系统详情窗口][11]
* 在“文件”应用中,我们对搜索结果进行了顺序优化。
* 我们在“软件”应用中提供了卸载 Flatpak 时删除用户数据的选项。
![卸载 Flatpak 应用时显示的新提示][12]
* “日历”应用新增了逐行滚动和更有用的搜索结果功能。
* 在“连接”应用中使用 RDP 连接时新增了复制文件、图像、文本的功能。
#### 性能改进
在 GNOME 45我们对性能提升投入了很多努力。
* 现在默认优先使用硬件加速的视频解码(在支持的环境下)。
* “文件”应用中的缩略图生成现在支持多线程处理。
* 我们显著减少了光标的卡顿和延迟。
* 在 GNOME Shell 中,以及“文件”、“软件”、“字符”等应用的搜索性能得到显著提升。
我们在整个技术栈中也进行了一些性能改进,包括 GLib、GTK 的 OpenGL 渲染器,和 systemd。这些性能优化工作都得益于我们在 Fedora Workstation 之前的版本中启用的帧指针!
### Fedora Linux 39 的底层突破
Fedora Linux 39 也有许多值得注意的底层变化,此处列出一些:
* Fedora Linux 39 现在将彩色 Bash 提示符作为默认设置!
![彩色 Bash 提示符][13]
* 对于使用 Indic 脚本的语系,我们现在把 Noto 字体作为默认字体,取代了旧的 Lohit 字体集。
* 由于使用量较低且缺乏主动维护,[模块化][14] 仓库在 Fedora Linux 39 中不再被提供。
* Fedora 模块构建服务将在 Fedora Linux 38 的生命周期结束后,也就是在 2024 年的 5 月左右结束。
*题图MJ/f06d0e9b-4c2e-4947-8333-e51340a45324*
--------------------------------------------------------------------------------
via: https://fedoramagazine.org/whats-new-fedora-workstation-39/
作者:[Merlin Cooper][a]
选题:[lujun9972][b]
译者:[ChatGPT](https://linux.cn/lctt/ChatGPT)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://fedoramagazine.org/author/mxanthropocene/
[b]: https://github.com/lujun9972
[1]: https://fedoramagazine.org/wp-content/uploads/2023/10/f39workstation-816x345.jpg
[2]: https://fedoraproject.org/wiki/User:Jimmac
[3]: https://fedoraproject.org/workstation/
[4]: https://docs.fedoraproject.org/en-US/quick-docs/upgrading-fedora-offline/
[5]: https://release.gnome.org/45/
[6]: https://fedoramagazine.org/wp-content/uploads/2023/10/Activites5-1.gif
[7]: https://fedoramagazine.org/wp-content/uploads/2023/10/Keyboard.png
[8]: https://blogs.gnome.org/alicem/2023/09/15/libadwaita-1-4/
[9]: https://fedoramagazine.org/wp-content/uploads/2023/10/New-sidebars-1024x545.png
[10]: https://fedoramagazine.org/wp-content/uploads/2023/10/Screenshot-from-2023-10-20-02-54-42.png
[11]: https://fedoramagazine.org/wp-content/uploads/2023/10/Screenshot-from-2023-10-20-02-58-49.png
[12]: https://fedoramagazine.org/wp-content/uploads/2023/10/image-1.png
[13]: https://fedoramagazine.org/wp-content/uploads/2023/10/Screenshot-from-2023-10-20-03-14-17.png
[14]: https://docs.fedoraproject.org/en-US/modularity/
[0]: https://img.linux.net.cn/data/attachment/album/202311/09/143205c3c7636ooh0zuhuu.png

View File

@ -1,87 +0,0 @@
[#]: subject: "Focusrite Extends Help to Linux Developer to Enable Driver Support"
[#]: via: "https://news.itsfoss.com/focusrite-linux/"
[#]: author: "Ankush Das https://news.itsfoss.com/author/ankush/"
[#]: collector: "lujun9972/lctt-scripts-1693450080"
[#]: translator: " "
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "
Focusrite Extends Help to Linux Developer to Enable Driver Support
======
Focusrite's wonderful gesture towards supporting Linux.
Even though Linux as a desktop platform is evolving fast, it is still the choice of few music producers compared to Windows and macOS, a major reason being the lack of official support from several DAWs and hardware manufacturers.
Fortunately, all of that is changing, with big names getting involved with Linux. Curious? Well, recently, [PreSonus announced Linux support][1] for its Studio One DAW.
And, now, [Focusrite][2], one of the direct competitors of [PreSonus][3], has joined in. 🤩 They also manufacture audio interfaces and other professional audio equipment.
If you are a music producer or an enthusiast, you should know that these names often go as synonymous with each other. While I currently use PreSonus over Focusrite, both the companies and their products are a big deal for the mass market.
### Focusrite's Pledge to Linux Support
**Geoffrey Bennett** is a user on the [LinuxMusicians][5] forums, known for his work to bring Linux support to popular Focusrite audio interfaces (Scarlett 2nd and 3rd gen).
Of course, it is a massive undertaking, unofficially.
To take that up a notch, he recently launched a fundraiser to get the latest audio interface by Focusrite, i.e., [Scarlett 4th gen][6].
Along with that, he had goals to work on podcast equipment by Focusrite.
![][7]
While the fundraiser was a success, **it also got attention from Focusrite** , and **they contributed a significant amount** to help him get the latest audio interface.
**Focusrite also offered to send him any devices he does not have and future products before release.**
To add cherry on top, they mentioned how their **engineering team can help him improve the development work**.
> While I had previously struggled to connect with the engineers or management at Focusrite, news of the overwhelming response to this fundraiser reached the top tiers there. Given the niche nature of Linux audio, I had kept my expectations in check, but this was beyond what I had imagined.
>
> I just got off a call with them where, beyond providing the hardware, they were inquiring about other ways they could support the development.
>
> Focusrite not only offered to send me any devices I didn't have in my collection but also proposed that for any future product releases, they will do their utmost to send me devices in advance. This means that Linux support could be ready right from the product launch!
>
> Furthermore, they are discussing how their engineering team could better help me to streamline the development process, and eliminate much of the guesswork.
>
> \- Geoffrey via [LinuxMusicians][8]
So, Focusrite wants to ensure that the Linux community gets to use their products with an improved experience. In other words, indirectly providing official support to the Linux platform.
And, this is big news!
These two big audio manufacturers **should persuade other software and hardware companies as well**. Whether it is in the form of official support for drivers, troubleshooting support for their customers, anything works.
Any kind of attention to the Linux platform will help improve the Linux audio ecosystem.
Audio professionals do not have to get locked in the proprietary platforms or pay for a license to get an operating system if more audio companies add Linux support.
_💬 I would be hopeful for that future! What do you think?_
* * *
--------------------------------------------------------------------------------
via: https://news.itsfoss.com/focusrite-linux/
作者:[Ankush Das][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://news.itsfoss.com/author/ankush/
[b]: https://github.com/lujun9972
[1]: https://news.itsfoss.com/studio-one-linux/
[2]: https://focusrite.com/
[3]: https://www.presonus.com/
[4]: https://itsfoss.com/content/images/size/w256h256/2022/12/android-chrome-192x192.png
[5]: https://linuxmusicians.com/
[6]: https://focusrite.com/scarlett/4th-generation
[7]: https://news.itsfoss.com/content/images/2023/10/go-fund-me-focusrite.jpg
[8]: https://linuxmusicians.com/viewtopic.php?t=26173&start=15

View File

@ -1,121 +0,0 @@
[#]: subject: "Tor Browser 13.0 Release is an Exciting Upgrade for Privacy-Focused Users"
[#]: via: "https://news.itsfoss.com/tor-browser-13-0-release/"
[#]: author: "Sourav Rudra https://news.itsfoss.com/author/sourav/"
[#]: collector: "lujun9972/lctt-scripts-1693450080"
[#]: translator: "geekpi"
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "
Tor Browser 13.0 Release is an Exciting Upgrade for Privacy-Focused Users
======
Step your private web experience with the new Tor Browser update!
The Tor Browser is one of the popular ways to access the [Tor network][1], it has been the first choice for many who want to circumvent restrictions placed on them, or even those who are privacy-conscious.
Of course, using Tor Browser is one of the easiest [ways to improve privacy][2].
While Tor Browser is based on Mozilla Firefox, it comes with its tweaks.
With the latest major update, let's see what has changed.
## 🆕 Tor Browser 13.0: What's New?
![][3]
According to the developers, Tor Browser 13.0 is here with a year's worth of changes that were pushed upstream and is based on [**Firefox ESR 115**][4].
The transition to a newer Firefox release has also **allowed them to take advantage of the revamped accessibility engine** introduced with [Firefox 113][5].
So, users who use **assistive technologies** such as screen readers can now expect **better performance than ever before** when using Tor Browser.
The highlights of this release include:
* **Improved Letterboxing**
* **Updated Homepage**
* **Refreshed Logos**
### Improved Letterboxing
![][7]
The letterboxing feature has received an important update with the Tor Browser 13.0 release.
The devs found **a problem with the default letterboxing dimensions** of 1000×1000 pixels where many modern websites would not behave correctly, resulting in those sites switching to layouts intended for tablets and smartphones.
Some sites would even serve a desktop site, but with only a horizontal scroll bar. To tackle this issue, they **tweaked the size of the windows** to be a maximum of 1400×900 pixels.
What this means for you, the end-user, is that you won't need to fiddle with the window sizes manually to get the right fit.
They also added that:
> Tor Browser for desktop should no longer trigger responsive break points on larger screens, and the vast majority of our desktop users will see a familiar landscape aspect-ratio more in-keeping with modern browsers.
>
> This particular size was chosen by crunching the numbers to offer greater real estate for new windows without increasing the number of buckets past the point of their usefulness.
### Updated Homepage
![][9]
An update to the homepage on the Tor Browser has been a long time coming. With this release, it now showcases a refreshed logo (more on that below) and **a new feature to “Onionize” DuckDuckGo** for accessing its '.onion site'.
This also **goes hand-in-hand with the revamped accessibility engine** that I mentioned earlier, allowing better support for users of screen readers and other assistive technology.
They also fixed the dreaded “ **red screen of death** ” error that would pop up once in a while when opening a new homepage tab.
### Refreshed Logos
![][10]
Since the beginning of the article, you may have noticed something different about Tor Browser's logo.
Well, the **logos for all the builds of Tor Browser have been refreshed** with a more clean and modern look.
This familiar “onion logo” has been around for a while now, it was chosen by a community poll back in the day. It's nice to see that they are still putting in efforts to improve it.
These were just the key highlights of the release, you can go through the [official release notes][11] to explore all the technical fixes and other refinements.
## 📥 Download Tor Browser 13.0
This release of the Tor browser is available for **Linux** , **Windows** , **Android,** and **macOS**. You can head over to the [official website][12] to get the package of your choice.
[Tor Browser 13.0][12]
* * *
--------------------------------------------------------------------------------
via: https://news.itsfoss.com/tor-browser-13-0-release/
作者:[Sourav Rudra][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://news.itsfoss.com/author/sourav/
[b]: https://github.com/lujun9972
[1]: https://en.wikipedia.org/wiki/Tor_(network)
[2]: https://itsfoss.com/improve-privacy/
[3]: https://news.itsfoss.com/content/images/2023/10/Tor_Browser_13.0_1.png
[4]: https://www.mozilla.org/en-US/firefox/115.0esr/releasenotes/
[5]: https://www.mozilla.org/en-US/firefox/113.0/releasenotes/
[6]: https://itsfoss.com/content/images/size/w256h256/2022/12/android-chrome-192x192.png
[7]: https://news.itsfoss.com/content/images/2023/10/Tor_Browser_13.0_2.png
[8]: https://news.itsfoss.com/content/images/2023/04/Follow-us-on-Google-News.png
[9]: https://news.itsfoss.com/content/images/2023/10/Tor_Browser_13.0_3.png
[10]: https://news.itsfoss.com/content/images/2023/10/Tor_Browser_13.0_4.png
[11]: https://blog.torproject.org/new-release-tor-browser-130/
[12]: https://www.torproject.org/download/

View File

@ -1,101 +0,0 @@
[#]: subject: "An Interesting Bluetooth App for Linux Just Appeared!"
[#]: via: "https://news.itsfoss.com/bluetooth-app-linux-overskride/"
[#]: author: "Sourav Rudra https://news.itsfoss.com/author/sourav/"
[#]: collector: "lujun9972/lctt-scripts-1693450080"
[#]: translator: " "
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "
An Interesting Bluetooth App for Linux Just Appeared!
======
All things Bluetooth with this brand new app!
A new app for Linux has shown up that could be a one-stop app for all your Bluetooth needs.
Called ' **Overskride** ', it is **an open-source app** making an appearance with its first release. Even though it is in a work-in-progress state, it offers some decent functionality.
Allow me to take you through it.
### Overskride: What to Expect?
![][1]
Overskride will appeal to the Rust-heads out there, as it is **primarily written in the Rust programming language** with **a GTK4/libadwaita flavor** to it.
According to the developer, it is meant to be **a straightforward Bluetooth and Obex client** _(planned for future)_ that works irrespective of the desktop environment or window manager being used.
Some key features include:
* **Trust/Block devices.**
* **Ability to send/receive files.**
* **Set connection timeout periods.**
* **Support for configuring the adapter.**
Looking at the screenshot above, you have got about every essential option to customize the Bluetooth device, and the connection, including the adapter name.
Of course, considering this is the first release of the app ever, one should not set high expectations. So, there would be room for improvements.
For now, here's **a little sneak peek of Overskride** to check what it offers.
I set it up using the provided Flatpak package on Ubuntu 22.04 LTS with GNOME 42.9. It didn't seem to have any issues with running on this setup.
Overskride was able to detect my smartphone, with various options to configure it.
![][3]
You can add the device to the trusted list or block list, rename it, and send a file as well.
I gave the **file transfer feature** a try, but before I could do that, I had to allow access to user files using [Flatseal][4] so that it could read the files on my system.
![][5]
The transfer began after I accepted the file transfer on my phone. The speeds were okay, and the file did get there in one piece, without any issues.
![][6]
I must say, for its first-ever release, the developer has presented us with a useful utility. I am excited to see what kind of improvements its future releases will offer.
The developer [was asked][7] by a Reddit user if there are any plans to **support showing the battery percentage of wireless headphones**. In response to that, the dev mentioned that it is tricky to do so, as every device follows a different specification, which makes such a goal harder to achieve.
### 📥 Get Overskride
For now, Overskride is only available via its [GitHub repo][9] as a Flatpak package. Or, you can compile it from source.
[Overskride (GitHub)][10]
I hope the developer publishes it on [Flathub][11] once it gets to a stable release to make it accessible for users.
* * *
--------------------------------------------------------------------------------
via: https://news.itsfoss.com/bluetooth-app-linux-overskride/
作者:[Sourav Rudra][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://news.itsfoss.com/author/sourav/
[b]: https://github.com/lujun9972
[1]: https://news.itsfoss.com/content/images/2023/10/Overskride_1.png
[2]: https://news.itsfoss.com/content/images/2023/04/Follow-us-on-Google-News.png
[3]: https://news.itsfoss.com/content/images/2023/10/Overskride_2.png
[4]: https://itsfoss.com/flatseal/
[5]: https://news.itsfoss.com/content/images/2023/10/Overskride_3.png
[6]: https://news.itsfoss.com/content/images/2023/10/Overskride_4.png
[7]: https://www.reddit.com/r/gnome/comments/17a5m99/full_release_of_my_bluetooth_app_d/k5b3ybg/
[8]: https://news.itsfoss.com/content/images/size/w256h256/2022/08/android-chrome-192x192.png
[9]: https://github.com/kaii-lb/overskride
[10]: https://github.com/kaii-lb/overskride/releases/
[11]: https://flathub.org/en

View File

@ -0,0 +1,157 @@
[#]: subject: "The Gecko Version of Midori Browser is Here!"
[#]: via: "https://news.itsfoss.com/midori-11/"
[#]: author: "Sourav Rudra https://news.itsfoss.com/author/sourav/"
[#]: collector: "lujun9972/lctt-scripts-1693450080"
[#]: translator: " "
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "
The Gecko Version of Midori Browser is Here!
======
Good to hear about a new Gecko-based browser. Sounds interesting!
We last heard about Midori back when [AstianGO was integrated][1] into the open-source browser for providing a more privacy-friendly search experience.
Even though **the Midori web browser has been around since 2007** , the development of it has seen its share of changes along the years, with the most major one being the merge with the [Astian Foundation][2] back in 2019.
After the merge, Midori **was an electron-powered browser based on Chromium** with a few privacy protection features in the mix.
However, Midori **has since parted ways with Chromium** for the next best thing, [**Gecko**][3], which you must've seen being used in browsers such as [**Firefox**][4], [**LibreWolf**][5], [**Tor Browser**][6] and more.
Allow me to show you what the **Gecko-based Midori 11 release** has to offer.
### Midori 11: What to Expect?
![][7]
Dubbed as the “ **fastest and lightest** ” version of Midori ever, this release has plenty of improvements to offer over its predecessors.
Some key highlights include:
* **Gecko-Based Revamp**
* **Visual Upgrades**
* **Workspaces Support**
#### Gecko-Based Revamp
![][8]
According to the developers, **basing Midori on Gecko has resulted in up to 15% better performance** compared to Chromium-based browsers, and **a bigger 20% uptick in performance** when compared to older Midori versions.
They also claim that **even though the memory consumption has increased** , they can still provide a lightweight and fast browsing experience.
#### Visual Upgrades
![][9]
The most obvious visual change with Midori is the **new sidebar menu** that comes with all the usual browser options such as Bookmarks, History, Downloads, and Notes. Pretty useful, kind of like Vivaldi offers.
There is also **support for WebApps** , by default, Midori includes the apps for **Google Translate** and **Astian Cloud**.
You **can also add other apps** such as Telegram, WhatsApp Web, or even other websites that support such functionality.
Then there are the **refreshed icons** across the browser, and **improved default themes** alongside the introduction of **two new themes, “Midori Fluerial UI” and “GNOME”.**
![][10]
Midori 11 also features **better tab management** , and **support for Light/Dark themes** , which can be set to follow your system's setting.
#### Workspaces Support
![][12]
You must've noticed the “Work” title on all the screenshots above. Well, that is the **new “Workspaces” feature** in action.
With Midori 11, you can now **use multiple workspaces to multitask effortlessly**. You will first have to enable it via the settings to get started.
Go into the settings menu, then search for 'workspaces', then click on “Enable Workspaces”.
Thereafter, you can go into the “Workspace Settings…” menu for tweaking it further and add new workspaces.
![][13]
You can also **set specific icons to the workspaces** to differentiate between them easily.
When all is set, you can switch between the workspaces by clicking on them in the sidebar.
![][15]
Midori will even **backup your workspaces at random time intervals** so that you don't lose them in the event of an abrupt shutdown.
Personally, I would like more web browsers based on Gecko, rather than Chromium.
And with what I experienced in my short usage with Midori 11, I can say that this **has the potential to be an [open-source Chrome alternative][16] for Linux**.
That being said, **I did face some weird bugs** here and there, nothing a few patches can't fix, though.
#### 🛠️ Other changes
As for the rest of the improvements, here are some notable ones:
* Midori now **supports OTA updates** , similar to what you see on Firefox.
* It is now **possible to import browser data** from browsers like Vivaldi, Opera, Chrome, etc.
* [**Multi-Account Containers**][17] support for sorting your browsing into different color-coded tabs.
* Native integration with its **MidoriVPN** and [**Astian Cloud**][18] **** services.
You may go through the [official blog post][19] to learn more about this release.
### 📥 Get Midori 11
This new Gecko-based release of Midori is available for **Linux** , **Windows** , and **Android**. You can head over to the [official website][20] to grab the package of your choice.
[Midori][20]
If you are interested in the source code, you can visit Midori's [GitLab repo][21] for that.
I really hope they make it available on the [Flathub store][22] and the [Snap store][23] in the future for a more accessible experience for Linux users.
_💬 Are you willing to give Midori 11.0 a try? Let me know your thoughts in the comments._
* * *
--------------------------------------------------------------------------------
via: https://news.itsfoss.com/midori-11/
作者:[Sourav Rudra][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://news.itsfoss.com/author/sourav/
[b]: https://github.com/lujun9972
[1]: https://news.itsfoss.com/midori-astiango/
[2]: https://astian.org/
[3]: https://developer.mozilla.org/en-US/docs/Glossary/Gecko
[4]: https://www.mozilla.org/en-US/firefox/
[5]: https://librewolf.net/
[6]: https://news.itsfoss.com/tor-browser-13-0-release/
[7]: https://news.itsfoss.com/content/images/2023/10/Midori_11_a.png
[8]: https://news.itsfoss.com/content/images/2023/10/Midori_11_b.png
[9]: https://news.itsfoss.com/content/images/2023/10/Midori_11_c.png
[10]: https://news.itsfoss.com/content/images/2023/10/Midori_11_c2.png
[11]: https://news.itsfoss.com/content/images/2023/04/Follow-us-on-Google-News.png
[12]: https://news.itsfoss.com/content/images/2023/10/Midori_11_d-1.png
[13]: https://news.itsfoss.com/content/images/2023/10/Midori_11_e-1.png
[14]: https://itsfoss.com/content/images/size/w256h256/2022/12/android-chrome-192x192.png
[15]: https://news.itsfoss.com/content/images/2023/10/Midori_11_f-2.png
[16]: https://itsfoss.com/open-source-browsers-linux/
[17]: https://support.mozilla.org/en-US/kb/containers
[18]: https://cloud.astian.org/
[19]: https://astian.org/midori-en/explore-midori-11-0-faster-and-lighter-than-ever/
[20]: https://astian.org/midori-browser/download/
[21]: https://gitlab.com/midori-web
[22]: https://flathub.org/
[23]: https://snapcraft.io/store

View File

@ -0,0 +1,112 @@
[#]: subject: "Audacity 3.4 Release Adds a New Musical View"
[#]: via: "https://news.itsfoss.com/audacity-3-4-release/"
[#]: author: "Sourav Rudra https://news.itsfoss.com/author/sourav/"
[#]: collector: "lujun9972/lctt-scripts-1693450080"
[#]: translator: " "
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "
Audacity 3.4 Release Adds a New Musical View
======
Audacity 3.4 makes it a better audio editor for musicians. Take a look!
Many consider Audacity a useful audio editor, or even a digital audio workstation (DAW) to some extent.
In the past, we have noticed that every point release, such as [Audacity 3.2][1], brings important improvements.
Continuing the trend is the recently introduced **Audacity 3.4** release, here with some new features and bug fixes.
Let's go through them.
## 🆕 Audacity 3.4: What's New?
![][3]
As a point release, it has some important new features to offer, such as:
* **New Musical View**
* **Better Exporting**
* **New Time Stretching Feature**
### New Musical View
![][4]
A new “**Beats and measures grid**” musical view has been added to Audacity that allows you to **easily align audio clips according to the tempo and rhythm of a project**.
The grid shows subdivisions of each measure according to the zoom level, and the clips can also be snapped to the nearest beat.
📋
You can access this by right-clicking anywhere on the ruler and selecting “Beats and Measures”.
### Better Exporting
![][5]
The exporter has been revamped and now gives you **all the options in a single window**, including **a native file browser**, **format selection**, **audio options**, and even **advanced options for 5.1 surround exports**.
### Time Stretching Feature
![][6]
It is now possible to **change the duration of an audio clip without affecting its pitch**.
This is thanks to the implementation of a **new time stretching algorithm** tailored for music. The developers claim that it **outperforms many commercially available options**.
📋
Just hold the 'Alt' (for macOS: Option) button while hovering over the top third of a clip edge to start stretching.
### 🛠️ Other Changes
Other than that, here are some other notable changes:
* Built-in **support for [Opus][7]**.
* The **project sample rate doesn't change** randomly when importing audio.
* The **spectrogram colors have been improved** to look more uniform.
* **Stereo tracks have been simplified**: the left and right channels will now always have synced clip starts and ends, with the same sample rate on both channels.
You can also go through the [official blog][8] to know more about this release.
## 📥 Download Audacity 3.4
As Audacity is **a cross-platform app**, this new release is available for **Linux**, **Windows**, and **macOS** from the [official site][9] and its [GitHub repo][10].
[Audacity][9]
_💬 Will you be trying this release out? Share your thoughts in the comments below._
* * *
--------------------------------------------------------------------------------
via: https://news.itsfoss.com/audacity-3-4-release/
作者:[Sourav Rudra][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://news.itsfoss.com/author/sourav/
[b]: https://github.com/lujun9972
[1]: https://news.itsfoss.com/audacity-3-2-release/
[3]: https://news.itsfoss.com/content/images/2023/11/Audacity_3.4.png
[4]: https://news.itsfoss.com/content/images/2023/11/Audacity_3.4_c.png
[5]: https://news.itsfoss.com/content/images/2023/11/Audacity_3.4_b.png
[6]: https://news.itsfoss.com/content/images/2023/11/Audacity_3.4_d.png
[7]: https://en.wikipedia.org/wiki/Opus_(audio_format)
[8]: https://www.audacityteam.org/blog/audacity-3-4/
[9]: https://www.audacityteam.org/download/
[10]: https://github.com/audacity/audacity/releases

View File

@ -2,7 +2,7 @@
[#]: via: "https://opensource.com/article/21/11/open-source-12-factor-app-methodology"
[#]: author: "Richard Conn https://opensource.com/users/richardaconn-0"
[#]: collector: "lujun9972"
[#]: translator: " "
[#]: translator: "trisbestever"
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "

View File

@ -1,111 +0,0 @@
[#]: subject: "Why Companies Need to Set Up an Open Source Program Office"
[#]: via: "https://www.opensourceforu.com/2022/08/why-companies-need-to-set-up-an-open-source-program-office/"
[#]: author: "Sakshi Sharma https://www.opensourceforu.com/author/sakshi-sharma/"
[#]: collector: "lkxed"
[#]: translator: " "
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "
Why Companies Need to Set Up an Open Source Program Office
======
*Managing the use of open source software and decreasing compliance risks is key to the success of any software product. An open source program office can help an organisation do just that. Find out how.*
Open source software (OSS) is integral to building a modern software solution. Be it an internal or a customer-facing solution, organisations rely significantly on open source software today. OSS components are governed by their unique licence terms, and non-compliance with these can often expose organisations to security and intellectual property (IP) risks which eventually may hamper a company's brand value.
When development teams are delivering a software release, they are primarily trying to meet project deadlines. Therefore, the tracking of versions of components and libraries, or the third party code pulled into the project, is not as rigorous as it should be. This means that licences and vulnerable OSS components can enter the code base and be delivered to customers. This can be risky for both the customer and the company delivering the software solution.
Another increasingly challenging area is that of developers contributing to open source projects. Companies can reap numerous benefits if they do so. This includes keeping skills current, retention of staff, attracting developers to work for the organisation, and improving the image of the company. Many open source projects require developers to sign a contributor licence agreement. This states that any IP created by the developer belongs to the project and not to the contributing developer. In this scenario, organisations need to be careful that IP and trade secrets that are not open source are not being signed over to open source projects.
Developers need to be educated about open source licensing issues, determining what to leverage, when or how much they can contribute to the community, and what packages might bring risk to the organisation's reputation. All this can be streamlined by putting a strategic policy and operations in place. One way of doing this is by creating an entity that is dedicated to working around all things open source—an entity called the open source program office (OSPO).
An OSPO creates an ecosystem for employees to use open source software in a way that compliance risks are kept at bay. The role of an OSPO is not limited to supervising open source usage; it is also responsible for contributing back to the community and managing the company's growth in the market by actively engaging in events, as well as conducting webinars and campaigns.
In this article we will see why there is a need for building an OSPO, and how it has emerged as a prominent entity for any open source policy and governance programme.
### Why should you have an OSPO?
With the wide use of open source software, regulating its usage and keeping the compliance strategy in check can often be overwhelming for the teams involved in the product development cycle.
Developers often overlook licence obligations, and sometimes the management or stakeholders are also not fully aware of the implications of non-compliance with these open source licences. An OSPO handles open source software right from its onboarding till the time it is delivered to the end user and everything in between, irrespective of whether it is being used for internal or external purposes.
An OSPO builds a solid foundation by starting compliance and regulatory checks early in the software development life cycle. This usually begins by guiding and aligning the involved team members towards a common path that benefits the organisation's values. The OSPO puts in place policies and processes around open source usage and governs the roles and responsibilities across the company.
To conclude, it aligns the efforts of all relevant teams involved in building the product and helps increase the organisation's capacity for better and more effective use of open source.
**The rise of the OSPO**
Companies like Microsoft, Google and Netflix have well-established OSPOs within their organisations. Many others, like Porsche and Spotify, are building their own OSPOs to leverage open source in an efficient way.
Here is what leaders from renowned companies have to say about OSPO practices.
* “As a business, it's a culture change,” explains Jeff McAffer, who ran Microsoft's Open Source Program Office for years and now is a director of products at GitHub focused on promoting open source in enterprises. “Many companies, they're not used to collaboration. They're not used to engaging with teams outside of their company.”
* “Engineering, business, and legal stakeholders each have their own goals and roles, oftentimes making trade-offs between speed, quality, and risk,” explains Remy DeCausemaker, head of open source at Spotify. “An OSPO works to balance and connect these individual goals into a holistic strategy that reduces friction.”
* Gil Yahuda, Verizon Media's OSPO leader, states, “We seek to create a working environment that talent wants to be part of. Our engineers know that they work in an open source friendly environment where they are supported and encouraged to work with the open source communities that are relevant to their work.”
![Figure 1: OSPO prevalence by industry 2018-2021 (Source: https://github.com/todogroup/osposurvey/tree/master/2021)][1]
### The function of an OSPO
The function of an OSPO may vary from organisation to organisation depending on the number of its employees and the number of people that are part of the OSPO team. Another factor is the purpose of using open source. An organisation may only want to use open source software for building the product or may also look at contributing back to the community.
Evaluating factors such as which open source licences are appropriate or whether full-time employees should be contributing to an open source project may be part of the OSPO's role. Putting a contributor licence agreement (CLA) in place for developers that are willing to contribute and determining what open source components will help in accelerating a product's growth and quality are some other roles of an OSPO.
Some of the key functions of an OSPO involve:
* Putting an open source compliance and governance policy in place to mitigate intellectual property risks to the organisation
* Educating developers towards better decision-making
* Defining policies that lay out the requirements and rules for working with open source across the company
* Monitoring the usage of open source software inside as well as outside the organisation
* Conducting meetings after every software release to discuss what went well and what could be done better with the OSS compliance process
* Accelerating the software development life cycle (SDLC)
* Ensuring transparency and coordination amongst different departments
* Streamlining processes to help mitigate risks at an early stage
* Encouraging members to contribute upstream to gain the collaborative and innovative benefits of open source projects
* Producing a report with suitable remediation and recommendations for the product team
* Preparing compliance artifacts and ensuring licence obligations are fulfilled
### Building an OSPO
The OSPO is typically staffed with personnel from multiple departments within the company. The process involves training and educating the relevant departments regarding open source compliance basics and the risks involved in its usage. It may provide legal and technical support services so that the open source requirement goals are met.
An OSPO may be formed by the following people within the organisation (this is a non-exhaustive list of people who can be a part of it):
* Principal/Chief: This role can be taken by the flag bearer, the one who runs the OSPO. The chief knows the various aspects of using open source like the effect of using different components, licence implications, development and contributing to the community. These requirements are entirely dependent on an organisation's needs.
* Program manager: The program manager sets the requirements and objectives for the target solution. He/she works alongside the product and engineering teams to connect workflows. This includes ensuring that policies and tools are implemented in a developer-friendly manner.
* Legal support: Legal support can come from outside the firm or in-house, but is an important part of an OSPO. The legal role works closely with the program manager to define policies that govern OSS use, including which open source licences are allowed for each product, how to (or whether to) contribute to existing open source projects, and so on.
* Product and engineering teams/developers: The engineering team should be well-versed with open source licence(s) and their associated risks. The team must seek approval from the OSPO before consuming any open source component. The team may have to be trained with respect to open source compliance basics and its usage at regular intervals.
* CTOs/CIOs/stakeholders: A company's leadership has a huge impact on the OSPO strategies. The stakeholders have a great say in the decision-making process for any product/solutions delivery. Due to the nature of the OSPO's function within a company, the VP of engineering, CTO/CIO, or chief compliance/risk officer must get involved in the OSPO.
* IT teams: Having support from the IT department is very important. An OSPO may be tasked with implementing internal tools to improve developer efficiency, monitor open source compliance, or dictate open source security measures. IT teams are key in helping to connect workflows, and ensure policies are implemented in a developer-friendly manner.
In the 2021 State of OSPO Survey conducted by the TODO Group, the key findings were:
* There are many opportunities to educate companies about how OSPOs can benefit them.
* OSPOs had a positive impact on their sponsors' software practices, but their benefits differed depending on the size of an organisation.
* Companies that intended to start an OSPO hoped it would increase innovation, but setting a strategy and a budget remained top challenges to their goals.
* Almost half of the survey participants without an OSPO believed it would help their company, but of those that didn't think it would help, 35 per cent said they haven't even considered it.
* 27 per cent of survey participants said a company's open source participation is very influential in their organisation's buying decisions.
The use of open source software when building any software solution is almost inevitable today. However, the open source licence risks cannot be overlooked. What is needed is a strategic streamlining process that helps combat the compliance issues that get in the way of using open source components effectively.
An OSPO helps set a regulatory culture by building a centralised, dedicated team that educates employees and raises awareness regarding everything related to open source usage in an organisation. An OSPO can also work as a guide to attract top talent from the industry, which will eventually be a boon for business goals.
--------------------------------------------------------------------------------
via: https://www.opensourceforu.com/2022/08/why-companies-need-to-set-up-an-open-source-program-office/
作者:[Sakshi Sharma][a]
选题:[lkxed][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.opensourceforu.com/author/sakshi-sharma/
[b]: https://github.com/lkxed
[1]: https://www.opensourceforu.com/wp-content/uploads/2022/07/Figure-1-OSPO-prevalence-by-industry-2018-2021-2.jpg

View File

@ -1,85 +0,0 @@
[#]: subject: "Security buzzwords to avoid and what to say instead"
[#]: via: "https://opensource.com/article/22/9/security-buzzword-alternatives"
[#]: author: "Seth Kenlon https://opensource.com/users/seth"
[#]: collector: "lkxed"
[#]: translator: " "
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "
Security buzzwords to avoid and what to say instead
======
Consider these thoughtful approaches to define what security really means in your open source project.
![Lock][1]
Image by: JanBaby, via Pixabay CC0.
Technology is a little famous for coming up with "buzzwords." Other industries do it, too, of course. "Story-driven" and "rules light" tabletop games are a big thing right now, and "deconstructed" burgers and burritos are a big deal in fine dining. The problem with buzzwords in tech, though, is that they can actually affect your life. When somebody calls an application "secure" to influence you to use their product, there's an implicit promise being made. "Secure" must mean that something's secure. It's safe for you to use and trust. The problem is, the word "secure" can actually refer to any number of things, and the tech industry often uses it as such a general term that it becomes meaningless.
Because "secure" can mean both so much and so little, it's important to use the word "secure" carefully. In fact, it's often best not to use the word at all, and instead, just say what you actually mean.
### When "secure" means encrypted
Sometimes "secure" is imprecise shorthand for *encrypted*. In this context, "secure" refers to some degree of difficulty for outside observers to eavesdrop on your data.
**Don't say this:** "This website is resilient and secure."
That sounds pretty good! You're probably imagining a website that has several options for 2-factor authentication, zero-knowledge data storage, and steadfast anonymity policies.
**Say this instead:** "This website has a 99% uptime guarantee, and its traffic is encrypted and verifiable with SSL."
Not only is the intent of the promise clear now, it also explains how "secure" is achieved (it uses SSL) and what the scope of "secure" is.
Note that there's explicitly no promise here about privacy or anonymity.
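If you want to see what that claim looks like at a technical level, here is a minimal Python sketch (the hostname is just a placeholder) that opens a TLS connection with the standard library and prints the certificate details that make the traffic verifiable:

```python
import socket
import ssl

hostname = "example.com"  # placeholder; substitute the site you want to check

# The default context verifies the certificate chain and the hostname.
context = ssl.create_default_context()

with socket.create_connection((hostname, 443)) as sock:
    with context.wrap_socket(sock, server_hostname=hostname) as tls:
        cert = tls.getpeercert()
        print("Protocol:", tls.version())
        print("Issued to:", dict(pair[0] for pair in cert["subject"]))
        print("Expires:", cert["notAfter"])
```

If the handshake fails, the promise of encrypted, verifiable traffic simply does not hold, which is exactly the kind of specific statement this article argues for.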
### When "secure" means restricted access
Sometimes the term "secure" refers to application or device access. Without clarification, "secure" could mean anything from the useless *security by obscurity* model, to a simple htaccess password, to biometric scanners.
**Don't say this:** "We've secured the system for your protection."
**Say this instead:** "Our system uses 2-factor authentication."
### When "secure" means data storage
The term "secure" can also refer to the way your data is stored on a server or a device.
**Don't say this:** "This device stores your data with security in mind."
**Say this instead:** "This device uses full disk encryption to protect your data."
When remote storage is involved, "secure" may instead refer to who has access to stored data.
**Don't say this:** "Your data is secure."
**Say this instead:** "Your data is encrypted using PGP, and only you have the private key."
### When "secure" means privacy
These days, the term "privacy" is almost as broad and imprecise as "security." On one hand, you might think that "secure" must mean "private," but that's true only when "secure" has been defined. Is something private because it has a password barrier to entry? Or is something private because it's been encrypted and only you have the keys? Or is it private because the vendor storing your data knows nothing about you (aside from an IP address?) It's not enough to declare "privacy" any more than it is to declare "security" without qualification.
**Don't say this:** "Your data is secure with us."
**Say this instead:** "Your data is encrypted with PGP, and only you have the private key. We require no personal data from you, and can only identify you by your IP address."
Some sites make claims about how long IP addresses are retained in logs, and promises about never surrendering data to authorities without warrants, and so on. Those are beyond the scope of technological "security," and have everything to do with trust, so don't confuse them for technical specifications.
### Say what you mean
Technology is a complex topic with a lot of potential for confusion. Communication is important, and while shorthand and jargon can be useful in some settings, generally it's better to be precise. When you're proud of the "security" of your project, don't generalize it with a broad term. Make it clear to others what you're doing to protect your users, and make it equally clear what you consider out of scope, and communicate these things often. "Security" is a great feature, but it's a broad one, so don't be afraid to brag about the specifics.
--------------------------------------------------------------------------------
via: https://opensource.com/article/22/9/security-buzzword-alternatives
作者:[Seth Kenlon][a]
选题:[lkxed][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/seth
[b]: https://github.com/lkxed
[1]: https://opensource.com/sites/default/files/lead-images/security-lock-password.jpg

View File

@ -2,7 +2,7 @@
[#]: via: "https://opensource.com/article/22/10/innovative-open-organization-chart"
[#]: author: "Ron McFarland https://opensource.com/users/ron-mcfarland"
[#]: collector: "lkxed"
[#]: translator: "CanYellow"
[#]: translator: " "
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "

View File

@ -2,7 +2,7 @@
[#]: via: "https://www.opensourceforu.com/2022/06/ai-some-history-and-a-lot-more-about-matrices/"
[#]: author: "Deepu Benson https://www.opensourceforu.com/author/deepu-benson/"
[#]: collector: "lkxed"
[#]: translator: " "
[#]: translator: "toknow-gh"
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "

View File

@ -1,114 +0,0 @@
[#]: subject: "Artificial Intelligence: Explaining the Basics"
[#]: via: "https://www.opensourceforu.com/2022/08/artificial-intelligence-explaining-the-basics/"
[#]: author: "Deepu Benson https://www.opensourceforu.com/author/deepu-benson/"
[#]: collector: "lkxed"
[#]: translator: "toknow-gh"
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "
Artificial Intelligence: Explaining the Basics
======
*If you are a student or professional interested in the latest trends in the computing world, you would have heard of terms like artificial intelligence, data science, machine learning, deep learning, etc. The first article in this series on artificial intelligence explains these terms, and sets the platform for a simple tutorial that will help beginners get started with AI.*
Today it is absolutely necessary for any student or professional in the field of computer science to learn at least the basics of AI, data science, machine learning and deep learning. However, where does one begin to do so?
To answer this question, I have gone through a number of textbooks and tutorials that teach AI. Some start at a theoretical level (a lot of maths), some teach you AI in a language-agnostic way (they don't care whether you know C, C++, Java, Python, or some other programming language), and yet others assume you are an expert in linear algebra, probability, statistics, etc. In my opinion, all of them are useful to a great extent. But the important question remains — where should an absolute beginner interested in AI begin his or her journey?
Frankly, there are many fine ways to begin your AI journey. However, I have a few concerns regarding many of them. Too much maths is a distraction while too little is like a driver who doesn't know where his/her car's engine is placed. Starting with advanced concepts is really effective if the potential AI engineer or data scientist is good in linear algebra, probability, statistics, etc. Starting with the very basics and ending at the middle of nowhere is fine only if the potential AI learner wants to end the journey at that particular point. Considering all these facts, I believe an AI tutorial for beginners should start at the very basics and end with a real AI project (small, yet one that will outperform any conventional program capable of performing the same task).
This series on AI will start from the very basics and will reach up to an intermediate level. But in addition to discussing topics in AI, I also want to cut the clutter (the name of a popular Indian news show) about the topics involved since there is a lot of confusion among people where terms like AI, machine learning, data science, etc, are concerned. AI based applications are often necessary due to the huge volume of data being produced every single day. A cursory search on the Internet will tell you that about 2.5 quintillion bytes of data are produced a day (quintillion is the enormous number 10^18). However, do remember that most of this data is absolutely irrelevant to us, including tons of YouTube videos with no merit, emails sent without a second thought, reports on trivial matters on newspapers, and so on and so forth. However, this vast ocean of data also contains invaluable knowledge which often is priceless. Conventional software programs cannot carry out the Herculean task of processing such data. AI is one of the few technologies that can handle this information overload.
We also need to distinguish between fact and fiction as far as the power of AI is concerned. I remember a talk by an expert in the field of AI a few years back. He talked about an AI based image recognition system that was able to classify the images of Siberian Huskies (a breed of dogs) and Siberian snow wolves with absolute or near absolute accuracy. Search the Internet and you will see how similar the two animals look. Had the system been so accurate, it would have been considered a triumph of AI. Sadly, this was not the case. The image recognition system was only classifying the background images of the two animals. Images of Siberian Huskies (since they are domestic animals) almost always had some rectangular or round objects in the background, whereas the images of Siberian snow wolves (being wild animals located in Siberia) had snow in the background. Such examples have led to the need for AI with some guarantee for accuracy in recent years.
Indeed, AI has shown some of its true power in recent years. A simple example is the suggestions we get from a lot of websites like YouTube, Amazon, etc. Many times, I have been astonished by the suggestions I have received as it almost felt as if AI was able to read my mind. Whether such suggestions are good or bad for us in the long run is a hot topic of debate. Then there is the critical question, “Is AI good or bad?” I believe that a Terminator movie sort of future, where machines attack humans deliberately is far, far away in the future. However, the word deliberately in the previous sentence is very important. AI based systems at present could malfunction and accidentally hurt humans. However, many systems that claim the powers of AI are conventional software programs with a large number of if and for statements with no magic of AI in it. Thus, it is safe to say that we are yet to see the real power of AI in our daily lives. But whether that impact will be good (like curing cancer) or bad (deepfake videos of world leaders leading to riots and war) is yet to be seen. On a personal level, I believe AI is a boon and will drastically improve the quality of life of the coming generations.
### What is AI?
So, before we proceed any further, let us try to understand how AI, machine learning, deep learning, data science, etc, are related yet distinct from each other. Very often these terms are used (erroneously) as synonyms. First, let us consider a Venn diagram that represents the relationship between AI, machine learning, deep learning and data science (Figure 1). Please keep in mind that this is not the only such Venn diagram. Indeed, it is very plausible that you may find other Venn diagrams showing different relationships between the four different entities shown in Figure 1. However, in my opinion, Figure 1 is the most authentic diagram that captures the interrelationship between the different fields in question to the maximum extent.
![Figure 1: The AI hierarchy and data science][1]
First of all, let me make a disclaimer. Many of the definitions of the terms involved in this first article in this series on AI may not be mathematically the most accurate. I believe that formally defining every term with utmost precision at this level of our discussion is counterproductive and a wastage of time. However, in the subsequent articles in this series we will revisit these terms and formally define them. At this point in time of our discussion, consider AI as a set of programs that can mimic human intelligence to some extent. But what do I mean by human intelligence?
Imagine your AI program is a one-year old baby. As usual, this baby will learn his/her mother tongue simply by listening to the people speaking around him/her. He/she will soon learn to identify shapes, colours, objects, etc, without any difficulty at all. Further, he/she will be able to respond to the emotions of people around him. For example, any 3-year-old baby will know how to sweet talk his/her parents into giving him/her all the chocolates and lollipops he/she wants. Similarly, an AI program too will be able to sense and adapt to its surroundings, just like the baby. However, such true AI applications will be achieved only in the far future (if at all).
Figure 1 shows that machine learning is a strict subset of AI and as such one of the many techniques used to implement artificially intelligent systems. Machine learning involves techniques in which large data sets are used to train programs so that the necessary task can be carried out effectively. Further, the accuracy of performing a particular task increases with larger and larger training data sets. Notice that there are other techniques used to develop artificially intelligent systems like Boolean logic based systems, fuzzy logic based systems, genetic programming based systems, etc. However, nowadays machine learning is the most vibrant technology used to implement AI based systems. Figure 1 also shows that deep learning is a strict subset of machine learning, making it just one of the many machine learning techniques. However, here again I need to inform you that, currently, in practice, most of the serious machine learning techniques involve deep learning. At this point, I refrain even from trying to define deep learning. Just keep in mind that deep learning involves the use of large artificial neural networks.
Now, what is data science (the red circle) doing in Figure 1? Well, data science is a discipline of computer science/mathematics which deals with the processing and interpretation of large sets of data. By large, how large do I mean? Some of the corporate giants like Facebook claimed that their servers could handle a few petabytes of data as far back in 2010. So, when we say huge data, we mean terabytes and petabytes and not gigabytes of data. A lot of data science applications involve the use of AI, machine learning and deep learning techniques. Hence, it is a bit difficult to ignore data science when we discuss AI. However, data science involves a lot of conventional programming and database management techniques like the use of Apache Hadoop for Big Data analysis.
Henceforth, I will be using the abbreviations AI, ML, DL and DS as shown in Figure 1. The discussions in this series will mainly focus on AI and ML with frequent additional references to data science.
### The beginning of our journey and a few difficult choices to make
Now that we know the topics that will be covered in this series of articles, let us discuss the prerequisites for joining this tutorial. I plan to cover the content in such a way that any person who can operate a Linux machine (a person who can operate an MS Windows or a macOS machine is also fine, but some of the installation steps might require additional help) along with basic knowledge of mathematics and computer programming will definitely appreciate the power of AI, once he or she has meticulously gone through this series.
It is possible to learn AI in a language-agnostic way without worrying much about programming languages. However, our discussion will involve a lot of programming and will be executed based on a single programming language. So, before we fix our (programming) language of communication, let us review the top programming languages used for AI, ML, DL and DS applications. One of the earliest languages used for developing AI based applications was Lisp, a functional programming language. Prolog, a logic programming language, was used in the 1970s for the same purpose. We will discuss more about Lisp and Prolog in the coming articles when we focus on the history of AI.
Nowadays, programming languages like Java, C, C++, Scala, Haskell, MATLAB, R, Julia, etc, are also used for developing AI based applications. However, the huge popularity and widespread use of Python in developing AI based applications almost made the choice unanimous. Hence, from this point onwards we will proceed with our discussion on AI based on Python. However, let me caution you. From here onwards, we make a number of choices (or rather I am making the choices for you). The choices mostly depend on ease of use, popularity, and (on a few occasions) on my own comfort and familiarity with a software/technique as well as the best of my intentions to make this tutorial highly effective. However, I encourage you to explore any other potential programming language, software or tool we may not have chosen. Sometimes such an alternative choice may be the best for you in the long run.
Now we need to make another immediate choice — whether to use Python 2 or Python 3? Considering the youth and the long career ahead of many of the potential readers of this series, I will stick with Python 3. First, let us install Python 3 in our systems. Execute the command sudo apt install python3 in the Linux terminal to install the latest version of Python 3 in your Ubuntu system (Python 3 is probably already installed in your system). The installation of Python 3 in other Linux distributions is also very easy. It can be easily installed in MS Windows and macOS operating systems too. The following command will show you the version of Python 3 installed in your system:
```
python3 --version
Python 3.8.10
```
We need to install a lot of Python packages as we proceed through the series. Hence, we need to install a package management system. Some of the choices include pip, Conda, Mamba, etc. I chose pip as our package management system for this tutorial because it is relatively simple as well as the recommended installation tool for Python. Personally, I am of the opinion that both Conda and Mamba are more powerful than pip and you are welcome to try them out. However, I will stick with pip. The command sudo apt install python3-pip will install pip in an Ubuntu system. Notice that pip, Conda and Mamba are cross-platform software and can be installed in Linux, Windows and macOS systems. The command pip3 --version shows the version of pip installed in your system, as shown below:
```
pip 20.0.2 from /usr/lib/python3/dist-packages/pip (python 3.8)
```
Now we need to install an integrated development environment (IDE) for Python. IDEs help programmers write, compile, debug and execute code very easily. There are many contenders for this position also. PyCharm, IDLE, Spyder, etc, are popular Python IDEs. However, since our primary aim is to develop AI and data science based applications, we consider two other heavy contenders — JupyterLab and Google Colab. Strictly speaking, they are not just IDEs; rather, they are very powerful Web based interactive development environments. Both work on Web browsers and offer immense power. JupyterLab is free and open source software supported by Project Jupyter, a non-profit organisation. Google Colab follows the freemium model, where the basic model is free and for any additional features a payment is required. I am of the opinion that Google Colab is more powerful and has more features than JupyterLab. However, the freemium model of Google Colab and my relative inexperience with it made me choose JupyterLab over Google Colab for this tutorial. But I strongly encourage you to get familiar with Google Colab at some point in your AI journey.
JupyterLab can be installed locally using the command pip3 install jupyterlab. The command jupyter-lab will execute JupyterLab in the default Web browser of your system. An older and similar Web based system called Jupyter Notebook is also provided by Project Jupyter. Jupyter Notebook can be locally installed with the command pip3 install notebook and can be executed using the command jupyter notebook. However, Jupyter Notebook is less powerful than JupyterLab, and it is now official that JupyterLab will eventually replace Jupyter Notebook. Hence, in this tutorial we will be using JupyterLab when the time comes. However, in the beginning stages of this tutorial we will be using the Linux terminal to run Python programs, and hence the immediate need for pip, the package management system.
Anaconda is a very popular distribution of Python and R programming languages for machine learning and data science applications. As potential AI engineers and data scientists, it is a good idea to get familiar with Anaconda also.
Now, we need to fix the most important aspect of this tutorial — the style in which we will cover the topics. There are a large number of Python libraries to support the development of AI based applications. Some of them are NumPy, SciPy, Pandas, Matplotlib, Seaborn, TensorFlow, Keras, Scikit-learn, PyTorch, etc. Many of the textbooks and tutorials on AI, machine learning and data science are based on complete coverage of one or more of these packages. Though such coverage of the features of a particular package is effective, I have planned a more maths-oriented tutorial. We will first discuss a maths concept required for developing AI applications, and then follow the discussion by introducing the necessary Python basics and the details of the Python libraries required. Thus, unlike most other tutorials, we will revisit Python libraries again and again to explore the features necessary for the implementation of certain maths concepts. However, at times, I will also request you to learn some basic concepts of Python and mathematics on your own. That settles the final question about the nature of this tutorial.
After all this buildup, it would be a sin if we stop this article at this point without discussing even a single line of Python code or a mathematical object necessary for AI. Hence, we move on to learn one of the most important topics in mathematics required for conquering AI and machine learning.
#### Vectors and matrices
A matrix is a rectangular array of numbers, symbols or mathematical expressions arranged in rows and columns. Figure 2 shows a 2 x 3 (pronounced 2 by 3) matrix having 2 rows and 3 columns. If you are familiar with programming, this matrix can be represented as a two-dimensional array in many popular programming languages. A matrix with only one row is called a row vector and a matrix with only one column is called a column vector. The vector [11, 22, 33] is an example of a row vector.
![Figure 2: A 2 x 3 matrix][2]
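As a quick aside, here is what such a matrix looks like as a two-dimensional list in plain Python; the entries are placeholder values, since only the shape (2 rows, 3 columns) is given by Figure 2:

```python
# A 2 x 3 matrix stored as a nested (two-dimensional) list: 2 rows, 3 columns.
# The entries are placeholders; only the shape matches Figure 2.
matrix = [
    [1, 2, 3],
    [4, 5, 6],
]

row_vector = [11, 22, 33]  # a matrix with a single row, as mentioned above

print(len(matrix), "rows x", len(matrix[0]), "columns")  # prints: 2 rows x 3 columns
```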
But why are matrices and vectors so important in a discussion on AI and machine learning? Well, they are the core of linear algebra, a branch of mathematics. Linear algebraic techniques are heavily used in AI and machine learning. Mathematicians have studied the properties and applications of matrices and vectors for centuries. Mathematicians like Gauss, Euler, Leibniz, Cayley, Cramer, and Hamilton have a theorem named after them in the fields of linear algebra and matrix theory. Thus, over the years, a lot of techniques have been developed in linear algebra to analyse the properties of matrices and vectors.
Complex data can often easily be represented in the form of a vector or a matrix. Let us see a simple example. Consider a person working in the field of medical transcription. From the medical records of a person named P, details of the age, height in centimetres, weight in kilograms, systolic blood pressure, diastolic blood pressure and fasting blood sugar level in milligrams/decilitres can be obtained. Further, such information can easily be represented as a row vector, for example P = [60, 160, 90, 130, 95, 160]. But here comes the first challenge in AI and machine learning. What if there are a billion health records? The task will be incomplete even if tens of thousands of professionals work to manually extract data from these billion health records. Hence, AI and machine learning applications try to extract data from these records using programs.
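To make this concrete in code, the record of person P described above can be stored as a plain Python list, and a collection of such records becomes a matrix with one row per person (the second record below is made up purely for illustration):

```python
# Row vector for person P: age, height (cm), weight (kg), systolic BP,
# diastolic BP, fasting blood sugar (mg/dL) -- values taken from the text.
P = [60, 160, 90, 130, 95, 160]

# A (tiny) matrix of health records: one row vector per person.
# The second row is a hypothetical record, added only for illustration.
records = [
    P,
    [45, 172, 78, 120, 80, 110],
]

print(len(records), "records with", len(records[0]), "fields each")
```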
The second challenge in AI and machine learning is data interpretation. This is a vast field and there are a number of techniques worth exploring. We will go through the most relevant of them in the coming articles in this series. AI and machine learning based applications also face hardware challenges, in addition to mathematical/computational challenges. With the huge amount of data being processed, data storage, processor speed, power consumption, etc, also become a major challenge for AI based applications. Challenges apart, I believe it is time for us to write the first line of AI code.
We will write a simple Python script to add two vectors. For that, we use a Python library called NumPy. This is a Python library that supports multi-dimensional matrices (arrays) and a lot of mathematical functions to operate on them. The command pip3 install numpy will install the package NumPy for Python 3. Notice that NumPy would have been preinstalled if you were using JupyterLab, Google Colab or Anaconda. However, we will operate from the Linux terminal for the first few articles in this series on AI for ease of use. Execute the command python3 on the Linux terminal to work from the Python console. This console is software that allows line-by-line execution of Python code. Figure 3 shows the line-by-line execution of the Python code to add two vectors and show the output on the terminal.
![Figure 3: Python code to find the sum of two row vectors][3]
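In case the figure is hard to read, here is the same session written out as a short script; the statements and values are exactly the ones described in the explanation below:

```python
import numpy as np            # import the NumPy library and name it np

a = np.array([11, 22, 33])    # first one-dimensional array (row vector)
b = np.array([44, 55, 66])    # second one-dimensional array (row vector)

c = np.add(a, b)              # element-wise addition of the two vectors

print(c)                      # prints: [55 77 99]
```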
First, let us try to understand the code. An important note before we proceed any further. Since this tutorial assumes very little programming experience, I will label lines of code as either (basic) or (AI). Lines labelled (basic) are part of classical Python code, whereas lines labelled (AI) are part of the Python code for developing AI applications. I know such a classification is not necessary or very meaningful. However, I want to distinguish between basic Python and advanced Python so that programmers with both basic and intermediate skills in programming will find this tutorial useful.
The line of code import numpy as np (basic) imports the library NumPy and names it as np. The import statement in Python is similar to the #include statement of C/C++ to use header files and the import statement of Java to use packages. The lines of code a = np.array([11, 22, 33]) and b = np.array([44, 55, 66]) (AI) create two one-dimensional arrays named a and b. Let me do one more simplification for the sake of understanding. For the time being, assume a vector is equivalent to a one-dimensional array. The line of code c = np.add(a, b) (AI) adds the two vectors named a and b and stores the result in a vector named c. Of course, naming variables as a, b, c, etc, is a bad programming practice, but mathematicians tend to differ and name vectors as u, v, w, etc. If you are an absolute beginner in Python programming, please learn how variables in Python work.
Finally, the line of code print(c) (basic) prints the value of the object, the vector [55 77 99], on the terminal. This is a line of basic Python code. The result c = [55, 77, 99] (where 55 = 11 + 44, 77 = 22 + 55 and 99 = 33 + 66) gives you a hint about vector and matrix addition. However, if you want to formally learn how vectors and matrices are added, and if you don't have any good mathematical textbooks on the topic available at your disposal, I suggest you go through the Wikipedia article on matrix addition. A quick search on the Internet will show you that a classic C, C++ or Java program to add two vectors takes a lot more code. This itself shows how suitable Python is for handling vectors and matrices, the lifeline of linear algebra. The strength of Python will be further appreciated as we perform more and more complex vector operations.
Before we conclude this article, I want to make two disclaimers. First, the programming example just discussed above deals with the addition of two row vectors (1 x 3 matrices to be precise). But a real machine learning application might be dealing with a 1000000 X 1000000 matrix. However, with practice and patience we will be able to handle these. The second disclaimer is that many of the definitions given in this article involve gross simplifications and some hand-waving. But, as mentioned earlier, I will give formal definitions for any term I have defined loosely before the conclusion of this series.
Now it is time for us to wind up this article. I request all of you to install the necessary software mentioned in this article and run each and every line of code discussed here. We will begin the next article in this series by discussing the history, scope and future of AI, and then proceed deeper into matrix theory, the backbone of linear algebra.
--------------------------------------------------------------------------------
via: https://www.opensourceforu.com/2022/08/artificial-intelligence-explaining-the-basics/
作者:[Deepu Benson][a]
选题:[lkxed][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.opensourceforu.com/author/deepu-benson/
[b]: https://github.com/lkxed
[1]: https://www.opensourceforu.com/wp-content/uploads/2022/05/Figure-1-The-AI-hierarchy-and-data-science.jpg
[2]: https://www.opensourceforu.com/wp-content/uploads/2022/05/Figure-2-A-2-X-3-matrix.jpg
[3]: https://www.opensourceforu.com/wp-content/uploads/2022/05/Figure-3-Python-code-to-find-the-sum-of-two-row-vectors.jpg

View File

@ -1,117 +0,0 @@
[#]: subject: "How ODT files are structured"
[#]: via: "https://opensource.com/article/22/8/odt-files"
[#]: author: "Jim Hall https://opensource.com/users/jim-hall"
[#]: collector: "lkxed"
[#]: translator: " "
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "
How ODT files are structured
======
Because OpenDocument Format (ODF) files are based on open standards, you can use other tools to examine them and even extract data from them. You just need to know where to start.
Word processing files used to be closed, proprietary formats. In some older word processors, the document file was essentially a memory dump from the word processor. While this made for faster loading of the document into the word processor, it also made the document file format an opaque mess.
Around 2005, the Organization for the Advancement of Structured Information Standards (OASIS) group defined an open format for office documents of all types, the Open Document Format for Office Applications (ODF). You may also see ODF referred to as simply "OpenDocument Format" because it is an open standard based on [OpenOffice.org's][4] XML file specification. ODF includes several file types, including ODT for OpenDocument Text documents. There's a lot to explore in an ODT file, and it starts with a zip file.
### Zip structure
Like all ODF files, ODT is actually an XML document and other files wrapped in a zip file container. Using zip means files take less room on disk, but it also means you can use standard zip tools to examine an ODF file.
I have an article about IT leadership called "Nibbled to death by ducks" that I saved as an ODT file. Since this is an ODF file, which is a zip file container, you can use unzip from the command line to examine it:
```
$ unzip -l 'Nibbled to death by ducks.odt'
Archive: Nibbled to death by ducks.odt
Length Date Time Name
39 07-15-2022 22:18 mimetype
12713 07-15-2022 22:18 Thumbnails/thumbnail.png
915001 07-15-2022 22:18 Pictures/10000201000004500000026DBF6636B0B9352031.png
10879 07-15-2022 22:18 content.xml
20048 07-15-2022 22:18 styles.xml
9576 07-15-2022 22:18 settings.xml
757 07-15-2022 22:18 meta.xml
260 07-15-2022 22:18 manifest.rdf
0 07-15-2022 22:18 Configurations2/accelerator/
0 07-15-2022 22:18 Configurations2/toolpanel/
0 07-15-2022 22:18 Configurations2/statusbar/
0 07-15-2022 22:18 Configurations2/progressbar/
0 07-15-2022 22:18 Configurations2/toolbar/
0 07-15-2022 22:18 Configurations2/popupmenu/
0 07-15-2022 22:18 Configurations2/floater/
0 07-15-2022 22:18 Configurations2/menubar/
1192 07-15-2022 22:18 META-INF/manifest.xml
970465 17 files
```
I want to highlight a few elements of the zip file structure:
1. The `mimetype` file contains a single line that defines the ODF document. Programs that process ODT files, such as a word processor, can use this file to verify the `MIME` type of the document. For an ODT file, this should always be:
```
application/vnd.oasis.opendocument.text
```
2. The `META-INF` directory has a single `manifest.xml` file in it. This file contains all the information about where to find other components of the ODT file. Any program that reads ODT files starts with this file to locate everything else. For example, the `manifest.xml` file for my ODT document contains this line that defines where to find the main content:
```
<manifest:file-entry manifest:full-path="content.xml" manifest:media-type="text/xml"/>
```
3. The `content.xml` file contains the actual content of the document.
4. My document includes a single screenshot, which is contained in the `Pictures` directory.
### Extracting files from an ODT file
Because the ODT document is just a zip file with a specific structure to it, you can extract files from it. You can start by unzipping the entire ODT file, such as with this unzip command:
```
$ unzip -q 'Nibbled to death by ducks.odt' -d Nibbled
```
A colleague recently asked for a copy of the image that I included in my article. I was able to locate the exact location of any embedded image by looking in the `META-INF/manifest.xml` file. The `grep` command can display any lines that describe an image:
```
$ cd Nibbled
$ grep image META-INF/manifest.xml
<manifest:file-entry manifest:full-path="Thumbnails/thumbnail.png" manifest:media-type="image/png"/>
<manifest:file-entry manifest:full-path="Pictures/10000201000004500000026DBF6636B0B9352031.png" manifest:media-type=" image/png/>
```
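The same lookup can also be scripted. Below is a minimal Python sketch, using only the standard library, that reads `META-INF/manifest.xml` from the ODT file used above and extracts every entry whose media type is an image; the manifest namespace is the standard ODF one:

```python
import zipfile
import xml.etree.ElementTree as ET

# Standard ODF manifest namespace.
NS = "urn:oasis:names:tc:opendocument:xmlns:manifest:1.0"

# File name taken from the example above; adjust it for your own document.
with zipfile.ZipFile("Nibbled to death by ducks.odt") as odt:
    manifest = ET.fromstring(odt.read("META-INF/manifest.xml"))
    for entry in manifest.findall(f"{{{NS}}}file-entry"):
        media_type = entry.get(f"{{{NS}}}media-type", "")
        full_path = entry.get(f"{{{NS}}}full-path", "")
        if media_type.startswith("image/"):
            print("extracting", full_path)
            odt.extract(full_path, "extracted")  # keeps the internal path under ./extracted/
```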
The image I'm looking for is saved in the `Pictures` folder. You can verify that by listing the contents of the directory:
```
$ ls -F
Configurations2/ manifest.rdf meta.xml Pictures/ styles.xml
content.xml META-INF/ mimetype settings.xml Thumbnails/
```
And here it is:
![Image of rubber ducks in two bowls][5]
Image by: (Jim Hall, CC BY-SA 4.0)
### OpenDocument Format
OpenDocument Format (ODF) files are an open file format that can describe word processing files (ODT), spreadsheet files (ODS), presentations (ODP), and other file types. Because ODF files are based on open standards, you can use other tools to examine them and even extract data from them. You just need to know where to start. All ODF files start with the `META-INF/manifest.xml` file, which is the "root" or "bootstrap" file for the rest of the ODF file format. Once you know where to look, you can find the rest of the content.
--------------------------------------------------------------------------------
via: https://opensource.com/article/22/8/odt-files
作者:[Jim Hall][a]
选题:[lkxed][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/jim-hall
[b]: https://github.com/lkxed
[1]: https://opensource.com/sites/default/files/lead-images/coffee_tea_laptop_computer_work_desk.png
[2]: https://unsplash.com/@jonasleupe?utm_source=unsplash&utm_medium=referral&utm_content=creditCopyText
[3]: https://unsplash.com/s/photos/tea-cup-computer?utm_source=unsplash&utm_medium=referral&utm_content=creditCopyText
[4]: http://OpenOffice.org
[5]: https://opensource.com/sites/default/files/2022-07/ducks.png

View File

@ -2,7 +2,7 @@
[#]: via: "https://www.debugpoint.com/lxqt-arch-linux-install/"
[#]: author: "Arindam https://www.debugpoint.com/author/admin1/"
[#]: collector: "lkxed"
[#]: translator: "suifeng333"
[#]: translator: " "
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "

View File

@ -1,215 +0,0 @@
[#]: subject: "Terminal Basics Series #1: Changing Directories in Linux Terminal"
[#]: via: "https://itsfoss.com/change-directories/"
[#]: author: "Abhishek Prakash https://itsfoss.com/author/abhishek/"
[#]: collector: "lkxed"
[#]: translator: "Cubik65536"
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "
Terminal Basics Series #1: Changing Directories in Linux Terminal
======
The cd command in Linux allows you to change directories (folders). You just have to give the path to the directory.
```
cd path_to_directory
```
And here comes the first challenge if you are new to Linux. You are probably not sure about the path.
Let's tackle that first.
### Understanding paths in Linux
The path traces the location in the Linux directory structure. Everything starts at the root and then goes from there.
You can check your current location with the following:
```
pwd
```
It should show an output like /home/username. Of course, it will be your username.
As you can see, paths are composed of `/` and directory names. The path `/home/abhishek/scripts` means the folder `scripts` is inside the folder `abhishek`, which is inside the folder `home`. The first `/` stands for root (from where the filesystem starts); the subsequent `/` characters are separators between the directories.
![Path in Linux][1]
> 🖥️ Type `ls /` in the terminal and press enter. It will show you the content of the root directory. Try it.
Now, there are two ways to specify a path: absolute and relative.
**Absolute path**: It starts with the root and then traces the location from there. If a path starts with /, it is an absolute path.
**Relative path**: This path originates from your current location in the filesystem. If I am in the location /home/abhishek and I have to go to /home/abhishek/Documents, I can simply go to Documents instead of specifying the absolute path /home/abhishek/Documents.
Before I show you the difference between the two, you should get familiar with two special directory notations:
- . (single dot) denotes the current directory.
- .. (two dots) denote the parent directory taking you one directory above the current one.
Here's a pictorial representation.
![Absolute path vs relative path][2]
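To make the distinction concrete, here is a small sketch (assuming the user `abhishek` and the folders from the examples above):
```
# absolute path: always starts with / (the root)
cd /home/abhishek/scripts

# relative path: starts from wherever you currently are
cd ../Documents    # go one directory up, then into Documents
```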
### Changing directory with cd command
Now that you are familiar with the concept of path, let's see how you can change the directory.
> 🖥️ If you **just type cd and press enter**, it will take you to your home directory from any location. Go on, try it.
Enter the following command to see the directories inside your home directories:
```
ls
```
This is what it shows to me:
```
abhishek@itsfoss:~$ ls
Desktop Downloads Pictures Templates VirtualBoxVMs
Documents Music Public Videos
```
Yours may be similar but not exactly the same.
Let's say you want to go to the Documents directory. Since it is available under the current directory, it will be easier to use the relative path here:
```
cd Documents
```
> 💡 The default terminal emulators of most Linux distributions show you the current location in the prompt itself. You don't have to use pwd all the time just to know where you are.
![Most Linux terminal prompts show the current location][3]
Now, let's say you want to switch to the Templates directory that was located in your home directory.
You can use the relative path `../Templates` (`..` takes you one directory above Documents, to /home/username, and from there you go to Templates).
But let's go for the absolute path instead. Replace 'abhishek' with your username.
```
cd /home/abhishek/Templates
```
Now you are in the Templates directory. How about going to the Downloads directory? Use the relative path this time:
```
cd ../Downloads
```
Here's a replay of all the above directory change examples you just read.
![cd command example][4]
> 💡 Utilize the tab completion in the terminal. Start typing a few letters of the command and directory and hit the tab key. It will try to autocomplete or show you the possible options. 
### Troubleshooting
You may encounter a few common errors while changing the directories in Linux terminal.
#### No such file or directory
If you see an error like this while changing the directories:
> bash: cd: directory_name: No such file or directory
Then you made a mistake with the path or the name of the directory. Here are a few things to note.
- Make sure there is no typo in the directory name.
- Linux is case sensitive. Downloads and downloads are not the same.
- You are not specifying the correct path. Perhaps you are in some other location? Or did you miss the first / in the absolute path?
![Common examples of "no such file or directory" error][5]
#### Not a directory
If you see an error like this:
> bash: cd: filename: Not a directory
It means that you are trying to use the cd command with a file, not a directory (folder). Clearly, you cannot enter a file the same way you enter a folder and hence this error.
![Not a directory error with the cd command][6]
#### Too many arguments
Another common rookie Linux mistake:
> bash: cd: too many arguments
The cd command takes only one argument. That means you can only specify one directory to the command.
If you specify more than one argument, or mistype a path by adding a space in it, you'll see this error.
![Too many arguments error in Linux terminal][7]
> 🏋🏻 If you run `cd -`, it will take you to your previous directory. It's quite handy when you are switching between two distant locations. You don't have to type the long paths again.
### Special directory notations
Before ending this tutorial, let me quickly tell you about the special notation `~`. In Linux, ~ is a shortcut for the user's home directory.
If user `abhi` is running it, ~ would mean `/home/abhi`, and if user `prakash` is running it, it would mean `/home/prakash`.
To summarize all the special directory notations you learned in this chapter of the terminal basics series:
| Notation | Description |
| :- | :- |
| . | Current directory |
| .. | Parent directory |
| ~ | Home directory |
| - | Previous directory |
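Here is a quick sketch of those notations in action (the directory names are just examples):
```
cd ~             # jump to your home directory from anywhere
cd ..            # move to the parent directory
cd ./Documents   # . is the current directory, so this equals: cd Documents
cd -             # jump back to whichever directory you were in before
```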
### Test your knowledge
Here are a few simple exercises to test your newly learned knowledge of the path and the cd command.
Move to your home directory and create a nested directory structure with this command:
```
mkdir -p sample/dir1/dir2/dir3
```
Now, try this one by one:
- Go to the dir3 using either absolute or relative path
- Move to dir1 using relative path
- Now go to dir2 using the shortest path you can imagine
- Change to the sample directory using absolute path
- Go back to your home directory
> 🔑 Want to know if you got all of them right or not? Feel free to [share your answers in the It's FOSS Community][8].
Now that you know how to change directories, how about you learn about creating them?
I highly recommend reading this article to learn small but useful things about the terminal and its commands.
Stay tuned for more chapters in the Linux Terminal Basics series if you want to learn the essentials of the Linux command line.
And, of course, your feedback on this new series is welcome. What can I do to improve it?
--------------------------------------------------------------------------------
via: https://itsfoss.com/change-directories/
作者:[Abhishek Prakash][a]
选题:[lkxed][b]
译者:[Cubik65536](https://github.com/Cubik65536)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://itsfoss.com/author/abhishek/
[b]: https://github.com/lkxed/
[1]: https://itsfoss.com/content/images/2023/02/path-linux.webp
[2]: https://itsfoss.com/content/images/2023/02/absolute-and-relative-path.png
[3]: https://itsfoss.com/content/images/2023/02/linux-terminal-prompt.png
[4]: https://itsfoss.com/content/images/2023/02/cd-command-example.svg
[5]: https://itsfoss.com/content/images/2023/02/common-errors-with-cd.png
[6]: https://itsfoss.com/content/images/2023/02/not-a-directory-error-linux.png
[7]: https://itsfoss.com/content/images/2023/02/too-many-arguments.png
[8]: https://itsfoss.community/t/exercise-in-changing-directories-in-linux-terminal/10177?ref=its-foss

View File

@ -1,313 +0,0 @@
[#]: subject: "Some notes on using nix"
[#]: via: "https://jvns.ca/blog/2023/02/28/some-notes-on-using-nix/"
[#]: author: "Julia Evans https://jvns.ca/"
[#]: collector: "lkxed"
[#]: translator: " "
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "
Some notes on using nix
======
Recently I started using a Mac for the first time. The biggest downside I've noticed so far is that the package management is much worse than on Linux. At some point I got frustrated with homebrew because I felt like it was spending too much time upgrading when I installed new packages, and so I thought maybe I'll try the [nix][1] package manager!
nix has a reputation for being confusing (it has its whole own programming language!), so I've been trying to figure out how to use nix in a way that's as simple as possible and does not involve managing any configuration files or learning a new programming language. Here's what I've figured out so far! We'll talk about how to:
- install packages with nix
- build a custom nix package for a C++ program called [paperjam][2]
- install a 5-year-old version of [hugo][3] with nix
As usual I've probably gotten some stuff wrong in this post since I'm still pretty new to nix. I'm also still not sure how much I like nix (it's very confusing!). But it's helped me compile some software that I was struggling to compile otherwise, and in general it seems to install things faster than homebrew.
#### what's interesting about nix?
People often describe nix as “declarative package management”. I don't care that much about declarative package management, so here are two things that I appreciate about nix:
- It provides binary packages (hosted at [https://cache.nixos.org/][4]) that you can quickly download and install
- For packages which dont have binary packages, it makes it easier to compile them
I think that the reason nix is good at compiling software is that:
- you can have multiple versions of the same library or program installed at a time (you could have 2 different versions of libc for instance). For example I have two versions of node on my computer right now, one at `/nix/store/4ykq0lpvmskdlhrvz1j3kwslgc6c7pnv-nodejs-16.17.1` and one at `/nix/store/5y4bd2r99zhdbir95w5pf51bwfg37bwa-nodejs-18.9.1`.
- when nix builds a package, it builds it in isolation, using only the specific versions of its dependencies that you explicitly declared. So there's no risk that the package secretly depends on another package on your system that you don't know about. No more fighting with `LD_LIBRARY_PATH`!
- a lot of people have put a lot of work into writing down all of the dependencies of packages
I'll give a couple of examples later in this post of two times nix made it easier for me to compile software.
#### how I got started with nix
here's how I got started with nix:
- Install nix. I forget exactly how I did this, but it looks like there's an [official installer][5] and an [unofficial installer from zero-to-nix.com][6]. The [instructions for uninstalling nix on MacOS with the standard multi-user install][7] are a bit complicated, so it might be worth choosing an installation method with simpler uninstall instructions.
- Put `~/.nix-profile/bin` on my PATH
- Install packages with `nix-env -iA nixpkgs.NAME`
- Thats it.
Basically the idea is to treat `nix-env -iA` like `brew install` or `apt-get install`.
For example, if I want to install `fish`, I can do that like this:
```
nix-env -iA nixpkgs.fish
```
This seems to just download some binaries from [https://cache.nixos.org][8]. Pretty simple.
Some people use nix to install their Node and Python and Ruby packages, but I haven't been doing that; I just use `npm install` and `pip install` the same way I always have.
#### some nix features I'm not using
There are a bunch of nix features/tools that I'm not using, but that I'll mention. I originally thought that you _had_ to use these features to use nix, because most of the nix tutorials I've read talk about them. But you don't have to use them.
- NixOS (a Linux distribution)
- [nix-shell][9]
- [nix flakes][10]
- [home-manager][11]
- [devenv.sh][12]
I wont go into these because I havent really used them and there are lots of explanations out there.
#### where are nix packages defined?
I think packages in the main nix package repository are defined in [https://github.com/NixOS/nixpkgs/][13]
It looks like you can search for packages at [https://search.nixos.org/packages][14]. The two official ways to search packages seem to be:
- `nix-env -qaP NAME`, which is extremely slow and which I haven't been able to get to actually work
- `nix --extra-experimental-features 'nix-command flakes' search nixpkgs NAME`, which does seem to work but is kind of a mouthful. Also all of the packages it prints out start with `legacyPackages` for some reason
I found a way to search nix packages from the command line that I liked better:
- Run `nix-env -qa '*' > nix-packages.txt` to get a list of every package in the Nix repository
- Write a short `nix-search` script that just greps `nix-packages.txt` (`cat ~/bin/nix-packages.txt | awk '{print $1}' | rg "$1"`), as sketched below
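For reference, that `nix-search` script could look something like this (a sketch based on the one-liner above; adjust the path to wherever you saved the package list):
```
#!/bin/sh
# nix-search: grep the package list generated with:
#   nix-env -qa '*' > ~/bin/nix-packages.txt
cat ~/bin/nix-packages.txt | awk '{print $1}' | rg "$1"
```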
#### everything is installed with symlinks
One of nix's major design choices is that there isn't one single `bin` directory with all your packages; instead you use symlinks. There are a lot of layers of symlinks. A few examples of symlinks:
- `~/.nix-profile` on my machine is (indirectly) a symlink to `/nix/var/nix/profiles/per-user/bork/profile-111-link/`
- `~/.nix-profile/bin/fish` is a symlink to `/nix/store/afkwn6k8p8g97jiqgx9nd26503s35mgi-fish-3.5.1/bin/fish`
When I install something, it creates a new `profile-112-link` directory with new symlinks and updates my `~/.nix-profile` to point to that directory.
I think this means that if I install a new version of `fish` and I don't like it, I can easily go back just by running `nix-env --rollback`: it'll move me to my previous profile directory.
#### uninstalling packages doesn't delete them
If I uninstall a nix package like this, it doesn't actually free any hard drive space, it just removes the symlinks.
```
$ nix-env --uninstall oil
```
I'm still not sure how to actually delete the package. I ran a garbage collection like this, which seemed to delete some things:
```
$ nix-collect-garbage
...
85 store paths deleted, 74.90 MiB freed
```
But I still have `oil` on my system at `/nix/store/8pjnk6jr54z77jiq5g2dbx8887dnxbda-oil-0.14.0`.
There's a more aggressive version of `nix-collect-garbage` that also deletes old versions of your profiles (so that you can't roll back)
```
$ nix-collect-garbage -d --delete-old
```
That doesnt delete `/nix/store/8pjnk6jr54z77jiq5g2dbx8887dnxbda-oil-0.14.0` either though and Im not sure why.
#### upgrading
It looks like you can upgrade nix packages like this:
```
nix-channel --update
nix-env --upgrade
```
(similar to `apt-get update && apt-get upgrade`)
I havent really upgraded anything yet. I think that if something goes wrong with an upgrade, you can roll back (because everything is immutable in nix!) with
```
nix-env --rollback
```
Someone linked me to [this post from Ian Henry][15] that talks about some confusing problems with `nix-env --upgrade` maybe it doesnt work the way youd expect? I guess Ill be wary around upgrades.
#### next goal: make a custom package of paperjam
After a few months of installing existing packages, I wanted to make a custom package with nix for a program called [paperjam][2] that wasn't already packaged.
I was actually struggling to compile `paperjam` at all, even without nix, because the version of `libiconv` I had on my system was wrong. I thought it might be easier to compile it with nix even though I didn't know how to make nix packages yet. And it actually was!
But figuring out how to get there was VERY confusing, so here are some notes about how I did it.
#### how to build an example package
Before I started working on my `paperjam` package, I wanted to build an example existing package just to make sure I understood the process for building a package. I was really struggling to figure out how to do this, but I asked in Discord and someone explained to me how I could get a working package from [https://github.com/NixOS/nixpkgs/][13] and build it. So here are those instructions:
**step 1:** Download some arbitrary package from [nixpkgs][13] on github, for example the `dash` package:
```
wget https://raw.githubusercontent.com/NixOS/nixpkgs/47993510dcb7713a29591517cb6ce682cc40f0ca/pkgs/shells/dash/default.nix -O dash.nix
```
**step 2**: Replace the first statement (`{ lib , stdenv , buildPackages , autoreconfHook , pkg-config , fetchurl , fetchpatch , libedit , runCommand , dash }:`) with `with import <nixpkgs> {};`. I don't know why you have to do this, but it works.
**step 3**: Run `nix-build dash.nix`
This compiles the package
**step 4**: Run `nix-env -i -f dash.nix`
This installs the package into my `~/.nix-profile`
Thats all! Once Id done that, I felt like I could modify the `dash` package and make my own package.
#### how I made my own package
`paperjam` has one dependency (`libpaper`) that also isn't packaged yet, so I needed to build `libpaper` first.
Here's `libpaper.nix`. I basically just wrote this by copying and pasting from other packages in the [nixpkgs][13] repository. My guess is that what's happening here is that nix has some default rules for compiling C packages (like "run `make install`"), so the `make install` happens by default and I don't need to configure it explicitly.
```
with import <nixpkgs> {};
stdenv.mkDerivation rec {
pname = "libpaper";
version = "0.1";
src = fetchFromGitHub {
owner = "naota";
repo = "libpaper";
rev = "51ca11ec543f2828672d15e4e77b92619b497ccd";
hash = "sha256-S1pzVQ/ceNsx0vGmzdDWw2TjPVLiRgzR4edFblWsekY=";
};
buildInputs = [ ];
meta = with lib; {
homepage = "https://github.com/naota/libpaper";
description = "libpaper";
platforms = platforms.unix;
license = with licenses; [ bsd3 gpl2 ];
};
}
```
Basically this just tells nix how to download the source from GitHub.
I built this by running `nix-build libpaper.nix`
Next, I needed to compile `paperjam`. Heres a link to the [nix package I wrote][16]. The main things I needed to do other than telling it where to download the source were:
- add some extra build dependencies (like `asciidoc`)
- set some environment variables for the install (`installFlags = [ "PREFIX=$(out)" ];`) so that it installed in the correct directory instead of `/usr/local/bin`.
I set the hashes by first leaving the hash empty, then running `nix-build` to get an error message complaining about a mismatched hash. Then I copied the correct hash out of the error message.
I figured out how to set `installFlags` just by running `rg PREFIX` in the nixpkgs repository. I figured that needing to set a `PREFIX` was pretty common and someone had probably done it before, and I was right. So I just copied and pasted that line from another package.
Then I ran:
```
nix-build paperjam.nix
nix-env -i -f paperjam.nix
```
and then everything worked and I had `paperjam` installed! Hooray!
#### next goal: install a 5-year-old version of hugo
Right now I build this blog using Hugo 0.40, from 2018. I don't need any new features so I haven't felt a need to upgrade. On Linux this is easy: Hugo's releases are a static binary, so I can just download the 5-year-old binary from the [releases page][17] and run it. Easy!
But on this Mac I ran into some complications. Mac hardware has changed in the last 5 years, so the Mac Hugo binary I downloaded crashed. And when I tried to build it from source with `go build`, that didn't work either because Go build norms have changed in the last 5 years as well.
I was working around this by running Hugo in a Linux docker container, but I didn't love that: it was kind of slow and it felt silly. It shouldn't be that hard to compile one Go program!
Nix to the rescue! Here's what I did to install the old version of Hugo with nix.
#### installing Hugo 0.40 with nix
I wanted to install Hugo 0.40 and put it in my PATH as `hugo-0.40`. Here's how I did it. I did this in a kind of weird way, but it worked ([Searching and installing old versions of Nix packages][18] describes a probably more normal method).
**step 1**: Search through the nixpkgs repo to find Hugo 0.40
I found the `.nix` file here [https://github.com/NixOS/nixpkgs/blob/17b2ef2/pkgs/applications/misc/hugo/default.nix][19]
**step 2**: Download that file and build it
I downloaded that file (and another file called `deps.nix` in the same directory), replaced the first line with `with import <nixpkgs> {};`, and built it with `nix-build hugo.nix`.
That almost worked without any changes, but I had to make two changes:
- replace `with stdenv.lib` with `with lib`, for some reason.
- rename the package to `hugo040` so that it wouldn't conflict with the other version of `hugo` that I had installed
**step 3**: Rename `hugo` to `hugo-0.40`
I wrote a little post-install script to rename the Hugo binary.
```
postInstall = ''
mv $out/bin/hugo $out/bin/hugo-0.40
'';
```
I figured out how to run this by running `rg 'mv '` in the nixpkgs repository and just copying and modifying something that seemed related.
**step 4**: Install it
I installed into my `~/.nix-profile/bin` by running `nix-env -i -f hugo.nix`.
And it all works! I put the final `.nix` file into my own personal [nixpkgs repo][20] so that I can use it again later if I want.
#### reproducible builds arent magic, theyre really hard
I think it's worth noting here that this `hugo.nix` file isn't magic: the reason I can easily compile Hugo 0.40 today is that many people worked for a long time to make it possible to package that version of Hugo in a reproducible way.
#### that's all!
Installing `paperjam` and this 5-year-old version of Hugo were both surprisingly painless and actually much easier than compiling it without nix, because nix made it much easier for me to compile the `paperjam` package with the right version of `libiconv`, and because someone 5 years ago had already gone to the trouble of listing out the exact dependencies for Hugo.
I don't have any plans to get much more complicated with nix (and it's still very possible I'll get frustrated with it and go back to homebrew!), but we'll see what happens! I've found it much easier to start in a simple way and then start using more features if I feel the need, instead of adopting a whole bunch of complicated stuff all at once.
I probably won't use nix on Linux: I've always been happy enough with `apt` (on Debian-based distros) and `pacman` (on Arch-based distros), and they're much less confusing. But on a Mac it seems like it might be worth it. We'll see! It's very possible in 3 months I'll get frustrated with nix and just go back to homebrew.
--------------------------------------------------------------------------------
via: https://jvns.ca/blog/2023/02/28/some-notes-on-using-nix/
作者:[Julia Evans][a]
选题:[lkxed][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://jvns.ca/
[b]: https://github.com/lkxed/
[1]: https://nixos.org/
[2]: https://mj.ucw.cz/sw/paperjam/
[3]: https://github.com/gohugoio/hugo/
[4]: https://cache.nixos.org/
[5]: https://nixos.org/download
[6]: https://zero-to-nix.com/concepts/nix-installer
[7]: https://nixos.org/manual/nix/stable/installation/installing-binary.html#macos
[8]: https://cache.nixos.org
[9]: https://nixos.org/guides/nix-pills/developing-with-nix-shell.html
[10]: https://nixos.wiki/wiki/Flakes
[11]: https://github.com/nix-community/home-manager
[12]: https://devenv.sh/
[13]: https://github.com/NixOS/nixpkgs/
[14]: https://search.nixos.org/packages
[15]: https://ianthehenry.com/posts/how-to-learn-nix/my-first-package-upgrade/
[16]: https://github.com/jvns/nixpkgs/blob/22b70a48a797538c76b04261b3043165896d8f69/paperjam.nix
[17]: https://github.com/gohugoio/hugo/releases/tag/v0.40
[18]: https://lazamar.github.io/download-specific-package-version-with-nix/
[19]: https://github.com/NixOS/nixpkgs/blob/17b2ef2/pkgs/applications/misc/hugo/default.nix
[20]: https://github.com/jvns/nixpkgs/

View File

@ -2,7 +2,7 @@
[#]: via: "https://opensource.com/article/23/4/retry-your-python-code-until-it-fails"
[#]: author: "Moshe Zadka https://opensource.com/users/moshez"
[#]: collector: "lkxed"
[#]: translator: "MjSeven"
[#]: translator: " "
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "

View File

@ -1,131 +0,0 @@
[#]: subject: "Fedora Linux Flatpak cool apps to try for October"
[#]: via: "https://fedoramagazine.org/fedora-linux-flatpak-cool-apps-to-try-for-october/"
[#]: author: "Eduard Lucena https://fedoramagazine.org/author/x3mboy/"
[#]: collector: "lujun9972/lctt-scripts-1693450080"
[#]: translator: " "
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "
Fedora Linux Flatpak cool apps to try for October
======
![][1]
Original image by Daimar Stein
This article introduces projects available in Flathub with installation instructions.
[Flathub][2] is the place to get and distribute apps for all of Linux. It is powered by Flatpak, allowing Flathub apps to run on almost any Linux distribution.
Please read “[Getting started with Flatpak][3]“. In order to enable Flathub as your Flatpak provider, use the instructions on the [flatpak site][4].
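On most current Fedora setups this boils down to a single command (shown here for reference; the flatpak site has the authoritative steps):
```
flatpak remote-add --if-not-exists flathub https://flathub.org/repo/flathub.flatpakrepo
```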
### Warehouse
[Warehouse][5] is a graphical utility to manage your installed flatpak applications and your flatpak remotes. Some of the most important features are:
* Viewing Flatpak Info
* Managing User Data
* Batch Actions
* Leftover Data Management
* Manage Remotes
You can install “Warehouse” by clicking the install button on the web site or manually using this command:
```
flatpak install flathub io.github.flattool.Warehouse
```
### Jogger
[Jogger][6] is an app for Gnome Mobile to track running and other workouts. It is built with GTK4, Libadwaita, Rust, and SQLite. Even though it is targeted at Gnome Mobile, it works pretty well under Gnome Shell and I find it very useful for keeping track of my stats. Some of the features are:
* Track a workout using Geoclue location
* Import workouts from a Fitotrack export
* Manually enter a workout
* View workout details
* Edit a workout
* Delete a workout
* Calculate calories burned for workouts
You can install “Jogger” by clicking the install button on the web site or manually using this command:
```
flatpak install flathub xyz.slothlife.Jogger
```
### Kooha
[Kooha][7] is a simple screen recorder with a minimalist interface. You can just click the record button without having to configure a bunch of settings.
The main features of Kooha include the following:
* Record microphone, desktop audio, or both at the same time
* Support for WebM, MP4, GIF, and Matroska formats
* Select a monitor, a window, or a portion of the screen to record
* Multiple sources selection
* Configurable saving location, pointer visibility, frame rate, and delay
* It works very well on Wayland.
You can install “Kooha” by clicking the install button on the web site or manually using this command:
```
flatpak install flathub io.github.seadve.Kooha
```
### Warzone 2100
And who doesn't love a classic Linux game?
[Warzone 2100][8] lets you command the forces of The Project in a battle to rebuild the world after mankind was nearly destroyed by nuclear missiles.
![][9]
Warzone 2100, released in 1999 and developed by Pumpkin Studios, is a ground-breaking and innovative 3D real-time strategy game.
In 2004 Eidos, in collaboration with Pumpkin Studios, released the source for the game under the terms of the GNU GPL. This release included everything but the music and in-game video sequences. These, however, were also later released.
This game has one problem: It uses an old platform package (org.kde.Platform 6.4). This means that it takes more space on disk but the fun is worth it!
You can install “Warzone 2100” by clicking the install button on the web site or manually using this command:
```
flatpak install flathub net.wz2100.wz2100
```
_**Warzone 2100**_ _**is also available as an RPM in Fedora's repositories**_
--------------------------------------------------------------------------------
via: https://fedoramagazine.org/fedora-linux-flatpak-cool-apps-to-try-for-october/
作者:[Eduard Lucena][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://fedoramagazine.org/author/x3mboy/
[b]: https://github.com/lujun9972
[1]: https://fedoramagazine.org/wp-content/uploads/2023/10/flatpak_for_October-816x345.jpg
[2]: https://flathub.org
[3]: https://fedoramagazine.org/getting-started-flatpak/
[4]: https://flatpak.org/setup/Fedora
[5]: https://flathub.org/apps/io.github.flattool.Warehouse
[6]: https://flathub.org/apps/xyz.slothlife.Jogger
[7]: https://flathub.org/apps/io.github.seadve.Kooha
[8]: https://flathub.org/apps/net.wz2100.wz2100
[9]: https://fedoramagazine.org/wp-content/uploads/2023/10/image-1024x738.png

View File

@ -0,0 +1,255 @@
[#]: subject: "Install, Configure and Use Remmina on Ubuntu"
[#]: via: "https://itsfoss.com/use-remmina/"
[#]: author: "Ankush Das https://itsfoss.com/author/ankush/"
[#]: collector: "lujun9972/lctt-scripts-1693450080"
[#]: translator: " "
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "
Install, Configure and Use Remmina on Ubuntu
======
Remmina is an **open-source desktop client written in GTK** that lets you connect to remote computers.
It is a simpler alternative to tools like **AnyDesk** and **TeamViewer**. You can connect through various protocols like **RDP, VNC, SSH, X2GO, HTTP(S)**, and more. The feature set can be extended with the help of available plugins if you need to.
Note that Remmina is only available for Linux users. So, if you have a Linux machine and want to connect to any other remote system, Remmina is perfect.
I should also mention that it is one of the [best remote desktop tools for Linux][1].
![][2]
Now that you know [Remmina][3] is a capable remote desktop tool, how do you use it?
Let me tell you more about it:
### Get Started by Installing Remmina on Ubuntu
![][4]
Remmina comes pre-installed in many Linux distributions, including the latest releases of Ubuntu and Fedora.
In either case, you can find it available in the official repositories for most Linux distributions.
If you want to install the latest version, I recommend installing the Flatpak or the Snap package.
For this article, I installed the [Snap package][5] from Ubuntu's Software Center. You can choose to get the Flatpak package from [Flathub][6] as well.
📋
The entire setup is tested on and written for systems on the same subnetwork. In simpler terms, your devices should be connected to the same router.
### Using Remmina to Connect to a Linux Remote Machine (Ubuntu)
To test this use-case, I tried connecting to my Linux laptop from my Linux desktop computer.
Before connecting, you need to know the IP address of the remote system. In my case, I went to system settings and utilized the graphical user interface **Wi-Fi→ Connection settings** ( _gear icon next to the network connected_ ) to check the IP address.
![][7]
You can also use the terminal and type in the following to get the IP address:
```
ip a
```
![][8]
You might have more things listed here, but the IP address next to "inet" under the Ethernet connection (you can spot it by the link/ether field) is what we need.
In my case, the IP address is **192.168.1.14**
Next, you need two more things before you can connect to the remote machine:
* Enable remote desktop in the remote system you wish to connect.
* The **username** and the **password** for remote connection.
#### Enable remote connection on the remote system
To get that, navigate through **Settings → Sharing → Remote Desktop**.
![][9]
Here, you need to toggle (enable) **Remote Desktop** , check " **Enable Legacy VNC Protocol** ", and then **Remote Control** as shown in the screenshot below.
![][10]
We utilize the **Virtual Network Computing** (VNC) protocol to connect, which is the preferred way to connect to Linux computers for most users.
As you can see, you can find the username and the password under the "**Authentication**" section in the image above.
Once you have the IP, authentication details, and enabled remote sharing, you just need to **add a new connection profile** and fill up the details.
![][11]
Here's how that looks:
![][12]
You can give it any name, and create a Group (if you need). For instance, I can have a group of home devices from a workplace, and keep it organized separately to identify remote computers easily.
For the protocol, use " **Remmina VNC Plugin** ".
Leave the rest of the settings at their defaults, and ensure that the Quality is set to **Good**, so you can have a good user experience without requiring a superfast internet connection.
However, you can choose to change it if you think you have poor network connectivity.
Once you are done with setting the options, hit " **Save and Connect** ".
Of course, if you want to connect to a machine once from a guest computer, and do not want to save the details, you can avoid saving and hit " **Connect** ".
![][13]
You might be prompted to approve the connection on the remote system the first time. If you do not want this, you can change the authentication method to "**Require a password**" in the remote desktop settings:
![][14]
And, voilà! You are now connected to the remote system.
![][15]
📋
You should change the quality settings before connecting to a remote system to avoid crashes.
While you are connected to the remote machine, you can take a screenshot, scale the window size, duplicate the connection, and do several more useful things from the left sidebar.
Now, let us move on to a Windows system.
### Using Remmina to Connect to a Windows Remote System
Just like a Linux (Ubuntu in our example) system, you have to enable remote desktop sharing before trying to connect to a Windows computer.
💡
For Windows, you can only find the Remote Desktop option in the ****Pro editions****.
So, if you have Windows 10/11 Home edition, you cannot enable Remote Desktop unless you upgrade it to the professional edition.
Assuming you have a license for the Windows Pro edition on your remote system, here's how to enable Remote Desktop sharing:
1. Head to **Settings → System → Remote Desktop**. Now, enable the option as shown below:
![][16]
You will get a prompt to confirm the action. Proceed with it.
![][17]
2. Next, we need information to connect to this Windows device. For that, you need to go to your network settings and hit "**Properties**" to get network information (IP address).
In this case, we have a wired connection.
![][18]
On the next screen, you will have the IP address of your system as shown below:
![][19]
In case you have a wireless connection, you will have to navigate to " **Change adapter options** " in the network status screen and then right-click on the **Wi-Fi adapter** connected to your computer.
Now, check the status of the Wi-Fi network and click on " **Details** " to get the IP address, as shown in the screenshots below:
![Checking Wi-Fi network details on Windows][20]
For the tutorial, I used the wired connection. So, the IP address for me remains:
**192.168.1.3**
I created a new remote connection profile with the above IP address and its protocol as **RDP (Remote Desktop Protocol)**.
![][21]
![][22]
The username and password are the same as your Windows computer credentials.
For me, it was my Microsoft account credentials linked to my Windows system. If you do not have a Microsoft account linked, it will be your local username and password.
Of course, you can choose to add a new user just for remote desktop if you like.
Because I accidentally entered the local credentials instead of my Microsoft account details, it prompted me again for authentication.
![][24]
Once you are done authenticating, you will get another prompt asking you to verify the certificate where your desktop name will also show up as follows:
![][25]
Accept the certificate. If you are incredibly cautious about the connection, you can pick to view the installed certificates on your system and verify by following [Microsoft's documentation][26].
![][27]
And, that is it! You are connected to the Windows remote machine.
**One important bit here** : you should **enable the dynamic resolution update** from the sidebar to make the resolution adapt to your screen size.
![][28]
For the tutorial, I have tested a Linux remote machine and a Windows system, which are the two most popular use-cases.
You need a static IP and IP forwarding to test the SSH connection protocol.
For the rest of the options in the sidebar, you get options for resizing the window, taking a screenshot of the remote machine, duplicating a connection, tweaking the connection quality for slow/fast performance, and more.
### Wrapping Up
Remmina works well for simple remote desktop connections, with advanced abilities for users who require them.
You can customize your remote connection with advanced options and SSH connection. However, if you do not know what you are doing, you probably do not need those options.
_How else do you use the Remmina app for remote connections? Let me know in the comments below._
--------------------------------------------------------------------------------
via: https://itsfoss.com/use-remmina/
作者:[Ankush Das][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://itsfoss.com/author/ankush/
[b]: https://github.com/lujun9972
[1]: https://itsfoss.com/remote-desktop-tools/
[2]: https://itsfoss.com/content/images/size/w256h256/2022/12/android-chrome-192x192.png
[3]: https://www.remmina.org/
[4]: https://itsfoss.com/content/images/2023/10/remina-remote-linux-desktop-connected-1.png
[5]: https://snapcraft.io/remmina
[6]: https://flathub.org/apps/org.remmina.Remmina
[7]: https://itsfoss.com/content/images/2023/10/ip-address-settings-1.png
[8]: https://itsfoss.com/content/images/2023/10/find-ip-adress-terminal-1-1.png
[9]: https://itsfoss.com/content/images/2023/10/remote-desktop-enable-1.png
[10]: https://itsfoss.com/content/images/2023/10/remote-desktop-enable-options-1.png
[11]: https://itsfoss.com/content/images/2023/10/remmina-main-1.png
[12]: https://itsfoss.com/content/images/2023/10/remmina-connection-vnc-protocol-1.png
[13]: https://itsfoss.com/content/images/2023/10/remmina-incoming-desktop-connection-request-1.png
[14]: https://itsfoss.com/content/images/2023/10/remote-control-access-methods-1.png
[15]: https://itsfoss.com/content/images/2023/10/remote-desktop-screenshot-linux-computer.png
[16]: https://itsfoss.com/content/images/2023/10/windows-enable-remote-desktop.jpg
[17]: https://itsfoss.com/content/images/2023/10/prompt-enable-remote-desktop-windows.jpg
[18]: https://itsfoss.com/content/images/2023/10/network-properties-windows.jpg
[19]: https://itsfoss.com/content/images/2023/10/windows-ip-address.jpg
[20]: https://itsfoss.com/content/images/2023/10/wifi-status-windows-1.jpg
[21]: https://itsfoss.com/content/images/2023/10/list-of-devices-connected-remmina-1-1.png
[22]: https://itsfoss.com/content/images/2023/10/remmina-connect-windows-computer-rdp-1.png
[24]: https://itsfoss.com/content/images/2023/10/remmina-authentication-microsoft-account-1-1.png
[25]: https://itsfoss.com/content/images/2023/10/remmina-accept-certificate-windows-1.png
[26]: https://learn.microsoft.com/en-us/dotnet/framework/wcf/feature-details/how-to-view-certificates-with-the-mmc-snap-in
[27]: https://itsfoss.com/content/images/2023/10/remmina-remote-desktop-windows.png
[28]: https://itsfoss.com/content/images/2023/10/dynamic-resolution.png

View File

@ -0,0 +1,174 @@
[#]: subject: "Install VSCodium on Fedora"
[#]: via: "https://itsfoss.com/install-vscodium-fedora/"
[#]: author: "Anuj Sharma https://itsfoss.com/author/anuj/"
[#]: collector: "lujun9972/lctt-scripts-1693450080"
[#]: translator: " "
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "
Install VSCodium on Fedora
======
Visual Studio Code (VS Code) is a popular cross-platform text editor developed by Microsoft. It's built on the Electron framework and is widely used by developers for coding and text editing tasks. The core of VS Code, known as "Code - OSS," is open source and distributed under the MIT License. However, Microsoft adds specific customizations and releases its branded version of the editor under a proprietary license.
To address concerns about [telemetry][1] and licensing, there's an alternative called "[VSCodium][2]," which is a community-driven, telemetry-disabled, and MIT-licensed version of VS Code.
![VSCodium running on Fedora 39.][3]
In this tutorial, I will guide you through the process of installing and running VSCodium on a Fedora Linux system.
There are three ways to do that:
1. Installing by downloading the rpm file from the release page. But, you need to repeat the process to get the package updated (which can get frustrating).
2. Adding the [paulcarroty][4] repo (as mentioned on the [VSCodium][5] website), so that when you [update your Fedora system][6], VSCodium also gets updated (which is quite seamless).
3. Using the Flatpak version, which you have probably already tried by installing it from GNOME Software (I had a bad experience with it, so mileage may vary).
The first one is very straightforward, i.e. downloading and [installing the rpm file][7] from the [release page][8]. So let's cut to the chase and follow the other two methods.
### Method 1: Installing VSCodium by adding the repository
Open a terminal: You can open a terminal by searching for "Terminal" in the application menu.
Add the GPG key: So that the package manager trusts the packager of the repo.
```
sudo rpmkeys --import https://gitlab.com/paulcarroty/vscodium-deb-rpm-repo/-/raw/master/pub.gpg
```
Add the VSCodium repository: The following command will add the repo to your Fedora system.
```
printf "[gitlab.com_paulcarroty_vscodium_repo]\nname=download.vscodium.com\nbaseurl=https://download.vscodium.com/rpms/\nenabled=1\ngpgcheck=1\nrepo_gpgcheck=1\ngpgkey=https://gitlab.com/paulcarroty/vscodium-deb-rpm-repo/-/raw/master/pub.gpg\nmetadata_expire=1h" | sudo tee -a /etc/yum.repos.d/vscodium.repo
```
Install VSCodium: Now that you've added the VSCodium repository, you can install it using the following command (those who love bleeding-edge software can replace the package name with **codium-insiders** to install the insiders version):
```
sudo dnf install codium
```
Launch VSCodium: You can now launch VSCodium either from the application menu or by running the following command in the terminal:
```
codium
```
#### Removing VSCodium
If you did not like VSCodium, perhaps because it's based on Electron, or maybe you switched to Neovim for good, you can remove it using this command:
```
sudo dnf remove codium
```
You may keep the repository and the signing key added to your system, but there is no real reason to.
So, let's get rid of that repo:
```
sudo rm /etc/yum.repos.d/vscodium.repo
```
### Method 2: Install VSCodium using flatpak
You can install the flatpak also. So, here are the steps to install VSCodium using Flatpak on Fedora:
You can install it straight away on Fedora if Flathub is enabled, which it probably is if you are using one of the latest releases and have third-party repos enabled. Just search for VSCodium in GNOME Software and click Install.
![Installing the flatpak from Gnome Software][9]
But folks running older versions for some reason, or a spin that does not have Flatpak enabled, can follow the steps below.
Install Flatpak and enable Flathub: Fedora usually comes with Flatpak pre-installed. If it's not installed, you can install it using the following command:
```
sudo dnf install flatpak
```
To enable the Flathub repository, use the following command:
```
flatpak remote-add --if-not-exists flathub https://flathub.org/repo/flathub.flatpakrepo
```
Now that you have Flatpak set up, you can install VSCodium using the Flathub repository. Run the following command:
```
flatpak install flathub com.vscodium.codium
```
Launch VSCodium: You can launch VSCodium via Flatpak using the following command:
```
flatpak run com.vscodium.codium
```
Alternatively, you can also search for "VSCodium" in your application menu and launch it from there.
That's it! You should now have VSCodium installed and running on your Fedora system using Flatpak.
To remove it use the command below:
```
sudo flatpak uninstall com.vscodium.codium
```
### Here comes the Bottomline
If you have used VS Code, you will not find any difference whatsoever between the two pieces of software. It is just for the sake of openness and freedom from the evil telemetry of Microsoft's version.
Coming to Fedora, I installed the Flatpak version first, but VSCodium did not show any window decorations in the Wayland session (which is the default, obviously), making it difficult to navigate using the mouse.
![VSCodium flatpak showing no window decorations.][10]
I tried some methods to fix it but had no luck, due to flatpak's weird locations for config files. If someone has a workaround for the above issue, or can figure one out, please comment down below. But using the rpm version was seamless (maybe the skeptics were right about alternative package management systems).
Extensions and plugins were fine for the most part. You can also follow this tutorial to install VSCodium on any distro from the Enterprise Linux family, e.g. Alma Linux, Rocky Linux, etc.
--------------------------------------------------------------------------------
via: https://itsfoss.com/install-vscodium-fedora/
作者:[Anuj Sharma][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://itsfoss.com/author/anuj/
[b]: https://github.com/lujun9972
[1]: https://code.visualstudio.com/docs/getstarted/telemetry
[2]: https://itsfoss.com/vscodium/
[3]: https://itsfoss.com/content/images/2023/10/codium-on-fedora.png
[4]: https://gitlab.com/paulcarroty/vscodium-deb-rpm-repo
[5]: https://vscodium.com/
[6]: https://itsfoss.com/update-fedora/
[7]: https://itsfoss.com/install-rpm-files-fedora/
[8]: https://github.com/VSCodium/vscodium/releases
[9]: https://itsfoss.com/content/images/2023/10/codium-flatpak-fedora.png
[10]: https://itsfoss.com/content/images/2023/10/codium-flatpak-no-decorations.png

View File

@ -0,0 +1,544 @@
[#]: subject: "Confusing git terminology"
[#]: via: "https://jvns.ca/blog/2023/11/01/confusing-git-terminology/"
[#]: author: "Julia Evans https://jvns.ca/"
[#]: collector: "lujun9972/lctt-scripts-1693450080"
[#]: translator: " "
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "
Confusing git terminology
======
Hello! I'm slowly working on explaining git. One of my biggest problems is that after almost 15 years of using git, I've become very used to git's idiosyncrasies and it's easy for me to forget what's confusing about it.
So I asked people [on Mastodon][1]:
> what git jargon do you find confusing? thinking of writing a blog post that explains some of git's weirder terminology: “detached HEAD state”, “fast-forward”, “index/staging area/staged”, “ahead of origin/main by 1 commit”, etc
I got a lot of GREAT answers and I'll try to summarize some of them here. Here's a list of the terms:
* [HEAD and “heads”][2]
* [“detached HEAD state”][3]
* [“ours” and “theirs” while merging or rebasing][4]
* [“Your branch is up to date with origin/main”][5]
* [HEAD^, HEAD~ HEAD^^, HEAD~~, HEAD^2, HEAD~2][6]
* [.. and …][7]
* [“can be fast-forwarded”][8]
* [“reference”, “symbolic reference”][9]
* [refspecs][10]
* [“tree-ish”][11]
* [“index”, “staged”, “cached”][12]
* [“reset”, “revert”, “restore”][13]
* [“untracked files”, “remote-tracking branch”, “track remote branch”][14]
* [checkout][15]
* [reflog][16]
* [merge vs rebase vs cherry-pick][17]
* [rebase onto][18]
* [commit][19]
* [more confusing terms][20]
I've done my best to explain what's going on with these terms, but they cover basically every single major feature of git, which is definitely too much for a single blog post, so it's pretty patchy in some places.
### `HEAD` and “heads”
A few people said they were confused by the terms `HEAD` and `refs/heads/main`, because it sounds like it's some complicated technical internal thing.
Here's a quick summary:
* “heads” are “branches”. Internally in git, branches are stored in a directory called `.git/refs/heads`. (technically the [official git glossary][21] says that the branch is all the commits on it and the head is just the most recent commit, but they're 2 different ways to think about the same thing)
* `HEAD` is the current branch. It's stored in `.git/HEAD`.
I think that “a `head` is a branch, `HEAD` is the current branch” is a good candidate for the weirdest terminology choice in git, but it's definitely too late for a clearer naming scheme, so let's move on.
There are some important exceptions to “HEAD is the current branch”, which we'll talk about next.
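If you want to see this for yourself, you can peek inside `.git` (a small sketch, assuming a repo with a hypothetical branch `my-feature-branch` and `main` currently checked out; branches that have been packed into `.git/packed-refs` won't show up in the directory listing):
```
$ cat .git/HEAD
ref: refs/heads/main
$ ls .git/refs/heads
main  my-feature-branch
```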
### “detached HEAD state”
You've probably seen this message:
```
$ git checkout v0.1
You are in 'detached HEAD' state. You can look around, make experimental
changes and commit them, and you can discard any commits you make in this
state without impacting any branches by switching back to a branch.
[...]
```
Here's the deal with this message:
* In Git, usually you have a “current branch” checked out, for example `main`.
* The place the current branch is stored is called `HEAD`.
* Any new commits you make will get added to your current branch, and if you run `git merge other_branch`, that will also affect your current branch
* But `HEAD` doesn't **have** to be a branch! Instead it can be a commit ID.
* Git calls this state (where HEAD is a commit ID instead of a branch) “detached HEAD state”
* For example, you can get into detached HEAD state by checking out a tag, because a tag isn't a branch
* if you don't have a current branch, a bunch of things break:
* `git pull` doesn't work at all (since the whole point of it is to update your current branch)
* neither does `git push` unless you use it in a special way
* `git commit`, `git merge`, `git rebase`, and `git cherry-pick` **do** still work, but they'll leave you with “orphaned” commits that aren't connected to any branch, so those commits will be hard to find
* You can get out of detached HEAD state by either creating a new branch or switching to an existing branch (there's a short sketch of this below)
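Here's a small sketch of getting into and out of that state (the tag name, branch name, and commit ID are just examples; `git switch` needs a reasonably recent git, otherwise `git checkout -b` does the same thing):
```
$ git checkout v0.1       # tags aren't branches, so this detaches HEAD
$ cat .git/HEAD           # HEAD is now a raw commit ID instead of a ref
f414f31c6b7...
$ git switch -c fix-v0.1  # make a new branch here to keep your commits...
$ git switch main         # ...or just go back to an existing branch
```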
### “ours” and “theirs” while merging or rebasing
If you have a merge conflict, you can run `git checkout --ours file.txt` to pick the version of `file.txt` from the “ours” side. But which side is “ours” and which side is “theirs”?
I always find this confusing and I never use `git checkout --ours` because of that, but I looked it up to see which is which.
For merges, here's how it works: the current branch is “ours” and the branch you're merging in is “theirs”, like this. Seems reasonable.
```
$ git checkout merge-into-ours # current branch is "ours"
$ git merge from-theirs # branch we're merging in is "theirs"
```
For rebases it's the opposite: the current branch is “theirs” and the target branch we're rebasing onto is “ours”, like this:
```
$ git checkout theirs # current branch is "theirs"
$ git rebase ours # branch we're rebasing onto is "ours"
```
I think the reason for this is that under the hood `git rebase main` is merging the current branch into main (it's like `git checkout main; git merge current_branch`), but I still find it confusing.
[This nice tiny site][22] explains the “ours” and “theirs” terms.
A couple of people also mentioned that VSCode calls “ours”/“theirs” “current change”/“incoming change”, and that it's confusing in the exact same way.
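As a concrete sketch, here's what using those flags looks like in the middle of a merge conflict (the branch and file names are just examples):
```
$ git merge feature
CONFLICT (content): Merge conflict in file.txt
$ git checkout --ours file.txt     # keep the current branch's version
$ git checkout --theirs file.txt   # or: keep the version from 'feature'
$ git add file.txt                 # mark the conflict as resolved
```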
### “Your branch is up to date with origin/main
This message seems straightforward: it's saying that your `main` branch is up to date with the origin!
But it's actually a little misleading. You might think that this means that your `main` branch is up to date. It doesn't. What it **actually** means is: if you last ran `git fetch` or `git pull` 5 days ago, then your `main` branch is up to date with all the changes **as of 5 days ago**.
So if you don't realize that, it can give you a false sense of security.
I think git could theoretically give you a more useful message like “is up to date with the origin's `main` **as of your last fetch 5 days ago**”, because the time that the most recent fetch happened is stored in the reflog, but it doesn't.
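The practical workaround is to fetch before trusting that message (a sketch):
```
$ git fetch origin     # update origin/main to what's actually on the remote
$ git status           # now "up to date" / "behind" reflects reality
```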
### `HEAD^`, `HEAD~` `HEAD^^`, `HEAD~~`, `HEAD^2`, `HEAD~2`
I've known for a long time that `HEAD^` refers to the previous commit, but I've been confused for a long time about the difference between `HEAD~` and `HEAD^`.
I looked it up, and here's how these relate to each other:
* `HEAD^` and `HEAD~` are the same thing (1 commit ago)
* `HEAD^^^` and `HEAD~~~` and `HEAD~3` are the same thing (3 commits ago)
* `HEAD^3` refers to the third parent of a commit, and is different from `HEAD~3`
This seems weird: why are `HEAD~` and `HEAD^` the same thing? And what's the “third parent”? Is that the same thing as the parent's parent's parent? (spoiler: it isn't) Let's talk about it!
Most commits have only one parent. But merge commits have multiple parents: they're merging together 2 or more commits. In Git `HEAD^` means “the parent of the HEAD commit”. But what if HEAD is a merge commit? What does `HEAD^` refer to?
The answer is that `HEAD^` refers to the **first** parent of the merge, `HEAD^2` is the second parent, `HEAD^3` is the third parent, etc.
But I guess they also wanted a way to refer to “3 commits ago”, so `HEAD^3` is the third parent of the current commit (which may have many parents if it's a merge commit), and `HEAD~3` is the parent's parent's parent.
I think in the context of the merge commit ours/theirs discussion earlier, `HEAD^` is “ours” and `HEAD^2` is “theirs”.
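Here's a sketch of poking at a merge commit's parents (the commit ID shown is made up; git will print your own):
```
$ git log --oneline -1            # suppose HEAD is a merge commit
1a2b3c4 Merge branch 'feature'
$ git rev-parse HEAD^             # first parent: the branch you were on
$ git rev-parse HEAD^2            # second parent: the branch you merged in
$ git rev-parse HEAD~2            # grandparent, following first parents only
```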
### `..` and `...`
Here are two commands:
* `git log main..test`
* `git log main...test`
What's the difference between `..` and `...`? I never use these so I had to look it up in [man git-range-diff][23]. It seems like the answer is that in this case:
```
A - B main
\
C - D test
```
* `main..test` is commits C and D
* `test..main` is commit B
* `main...test` is commits B, C, and D
But it gets worse: apparently `git diff` also supports `..` and `...`, but they do something completely different than they do with `git log`? I think the summary is:
* `git log test..main` shows changes on `main` that aren't on `test`, whereas `git log test...main` shows changes on _both_ sides.
* `git diff test..main` shows `test` changes _and_ `main` changes (it diffs the two tips, `B` and `D`), whereas `git diff test...main` diffs `A` and `B` (it only shows you the diff on one side, in this case the `main` side).
[this blog post][24] talks about it a bit more.
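With the A/B/C/D picture above, the log version works out to (a sketch you can try on any two branches):
```
$ git log --oneline main..test    # shows C and D
$ git log --oneline test..main    # shows B
$ git log --oneline main...test   # shows B, C, and D
```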
### “can be fast-forwarded”
Here's a very common message you'll see in `git status`:
```
$ git status
On branch main
Your branch is behind 'origin/main' by 2 commits, and can be fast-forwarded.
(use "git pull" to update your local branch)
```
What does “fast-forwarded” mean? Basically it's trying to say that the two branches look something like this: (newest commits are on the right)
```
main: A - B - C
origin/main: A - B - C - D - E
```
or visualized another way:
```
A - B - C - D - E (origin/main)
|
main
```
Here `origin/main` just has 2 extra commits that `main` doesn't have, so it's easy to bring `main` up to date: we just need to add those 2 commits. Literally nothing can possibly go wrong: there's no possibility of merge conflicts. A fast-forward merge is a very good thing! It's the easiest way to combine 2 branches.
After running `git pull`, you'll end up in this state:
```
main: A - B - C - D - E
origin/main: A - B - C - D - E
```
Here's an example of a state which **can't** be fast-forwarded.
```
A - B - C - X (main)
|
- - D - E (origin/main)
```
Here `main` has a commit that `origin/main` doesn't have (`X`). So you can't do a fast-forward. In that case, `git status` would say:
```
$ git status
Your branch and 'origin/main' have diverged,
and have 1 and 2 different commits each, respectively.
```
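Relatedly, if you only ever want pulls that are guaranteed fast-forwards (so git refuses instead of quietly creating a merge), there's a flag for that (a sketch, not something the message itself mentions):
```
$ git pull --ff-only              # fails cleanly if the branches have diverged
$ git merge --ff-only origin/main # same idea after a manual git fetch
```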
### “reference”, “symbolic reference”
I've always found the term “reference” kind of confusing. There are at least 3 things that get called “references” in git:
* branches and tags like `main` and `v0.2`
* `HEAD`, which is the current branch
* things like `HEAD^^^`, which git will resolve to a commit ID. Technically these are probably not “references”; I guess git [calls them][25] “revision parameters” but I've never used that term.
“symbolic reference” is a very weird term to me because personally I think the only symbolic reference I've ever used is `HEAD` (the current branch), and `HEAD` has a very central place in git (most of git's core commands' behaviour depends on the value of `HEAD`), so I'm not sure what the point of having it as a generic concept is.
### refspecs
When you configure a git remote in `.git/config`, there's this `+refs/heads/main:refs/remotes/origin/main` thing.
```
[remote "origin"]
url = git@github.com:jvns/pandas-cookbook
fetch = +refs/heads/main:refs/remotes/origin/main
```
I don't really know what this means; I've always just used whatever the default is when you do a `git clone` or `git remote add`, and I've never felt any motivation to learn about it or change it from the default.
### “tree-ish”
The man page for `git checkout` says:
```
git checkout [-f|--ours|--theirs|-m|--conflict=<style>] [<tree-ish>] [--] <pathspec>...
```
What's `tree-ish`??? What git is trying to say here is that when you run `git checkout THING .`, `THING` can be either:
* a commit ID (like `182cd3f`)
* a reference to a commit ID (like `main` or `HEAD^^` or `v0.3.2`)
* a subdirectory **inside** a commit (like `main:./docs`)
* I think that's it????
Personally I've never used the “directory inside a commit” thing, and from my perspective “tree-ish” might as well just mean “commit or reference to commit”.
### “index”, “staged”, “cached”
All of these refer to the exact same thing (the file `.git/index`, which is where your changes are staged when you run `git add`):
* `git diff --cached`
* `git rm --cached`
* `git diff --staged`
* the file `.git/index`
Even though they all ultimately refer to the same file, there's some variation in how those terms are used in practice:
* Apparently the flags `--index` and `--cached` do not generally mean the same thing. I have personally never used the `--index` flag so I'm not going to get into it, but [this blog post by Junio Hamano][26] (git's lead maintainer) explains all the gnarly details
* the “index” lists untracked files (I guess for performance reasons) but you don't usually think of the “staging area” as including untracked files
### “reset”, “revert”, “restore”
A bunch of people mentioned that “reset”, “revert” and “restore” are very similar words and it's hard to differentiate them.
I think it's made worse because
* `git reset --hard` and `git restore .` on their own do basically the same thing. (though `git reset --hard COMMIT` and `git restore --source COMMIT .` are completely different from each other)
* the respective man pages don't give very helpful descriptions:
* `git reset`: “Reset current HEAD to the specified state”
* `git revert`: “Revert some existing commits”
* `git restore`: “Restore working tree files”
Those short descriptions do give you a better sense for which noun is being affected (“current HEAD”, “some commits”, “working tree files”) but they assume you know what “reset”, “revert” and “restore” mean in this context.
Here are some short descriptions of what they each do:
* `git revert COMMIT`: Create a new commit thats the “opposite” of COMMIT on your current branch (if COMMIT added 3 lines, the new commit will delete those 3 lines)
* `git reset --hard COMMIT`: Force your current branch back to the state it was at `COMMIT`, erasing any new changes since `COMMIT`. Very dangerous operation.
* `git restore --source=COMMIT PATH`: Take all the files in `PATH` back to how they were at `COMMIT`, without changing any other files or commit history.
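In command form, those look something like this (a sketch; `abc1234` and `src/` are made-up examples):
```
# create a new commit that undoes abc1234, keeping history intact
git revert abc1234
# move the current branch back to abc1234, discarding everything after it (dangerous!)
git reset --hard abc1234
# bring the files under src/ back to their state at abc1234, touching nothing else
git restore --source=abc1234 src/
```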
### “untracked files”, “remote-tracking branch”, “track remote branch”
Git uses the word “track” in 3 different related ways:
* `Untracked files:` in the output of `git status`. This means those files arent managed by Git and wont be included in commits.
* a “remote tracking branch” like `origin/main`. This is a local reference, and its the commit ID that `main` pointed to on the remote `origin` the last time you ran `git pull` or `git fetch`.
* “branch foo set up to **track** remote branch bar from origin”
The “untracked files” and “remote tracking branch” thing is not too bad they both use “track”, but the context is very different. No big deal. But I think the other two uses of “track” are actually quite confusing:
* `main` is a branch that tracks a remote
* `origin/main` is a remote-tracking branch
But a “branch that tracks a remote” and a “remote-tracking branch” are different things in Git and the distinction is pretty important! Heres a quick summary of the differences:
* `main` is a branch. You can make commits to it, merge into it, etc. Its often configured to “track” the remote `main` in `.git/config`, which means that you can use `git pull` and `git push` to push/pull changes.
* `origin/main` is not a branch. Its a “remote-tracking branch”, which is not a kind of branch (Im sorry). You **cant** make commits to it. The only way you can update it is by running `git pull` or `git fetch` to get the latest state of `main` from the remote.
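One way to see the difference in your own repository (a sketch):
```
# local branches, plus which remote branch (if any) each one is configured to track
git branch -vv
# remote-tracking branches like origin/main (you can't commit to these)
git branch -r
```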
Id never really thought about this ambiguity before but I think its pretty easy to see why folks are confused by it.
### checkout
Checkout does two totally unrelated things:
* `git checkout BRANCH` switches branches
* `git checkout file.txt` discards your unstaged changes to `file.txt`
This is well known to be confusing and git has actually split those two functions into `git switch` and `git restore` (though you can still use checkout if, like me, you have 15 years of muscle memory around `git checkout` that you dont feel like unlearning)
Also personally after 15 years I still cant remember the order of the arguments to `git checkout main file.txt` for restoring the version of `file.txt` from the `main` branch.
I think sometimes you need to pass `--` to `checkout` as an argument somewhere to help it figure out which argument is a branch and which ones are paths but I never do that and Im not sure when its needed.
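For reference, the newer, less ambiguous commands look like this (assuming Git 2.23 or later; the file and branch names are made up):
```
git switch mybranch                   # instead of: git checkout mybranch
git restore file.txt                  # instead of: git checkout file.txt
git restore --source=main file.txt    # roughly: git checkout main -- file.txt
```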
### reflog
Lots of people mentioned reading reflog as `re-flog` and not `ref-log`. I won't get deep into the reflog here because this post is REALLY long, but:
* “reference” is an umbrella term git uses for branches, tags, and HEAD
* the reference log (“reflog”) gives you the history of everything a reference has ever pointed to
* It can help get you out of some VERY bad git situations, like if you accidentally delete an important branch
* I find it one of the most confusing parts of gits UI and I try to avoid needing to use it.
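If you do end up needing it, the basic invocations are (a sketch):
```
# everywhere HEAD has pointed recently
git reflog
# the history of what the main branch has pointed to
git reflog show main
```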
### merge vs rebase vs cherry-pick
A bunch of people mentioned being confused about the difference between merge and rebase and not understanding what the “base” in rebase was supposed to be.
Ill try to summarize them very briefly here, but I dont think these 1-line explanations are that useful because people structure their workflows around merge / rebase in pretty different ways and to really understand merge/rebase you need to understand the workflows. Also pictures really help. That could really be its whole own blog post though so Im not going to get into it.
* merge creates a single new commit that merges the 2 branches
* rebase copies commits on the current branch to the target branch, one at a time.
* cherry-pick is similar to rebase, but with a totally different syntax (one big difference is that rebase copies commits FROM the current branch, cherry-pick copies commits TO the current branch)
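In their simplest forms, they look like this (a sketch; the branch name and commit ID are made up):
```
# merge otherbranch into the current branch (creates one new merge commit)
git merge otherbranch
# replay the current branch's commits on top of main, one at a time
git rebase main
# copy a single commit from somewhere else onto the current branch
git cherry-pick abc1234
```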
### `rebase --onto`
`git rebase` has a flag called `--onto`. This has always seemed confusing to me because the whole point of `git rebase main` is to rebase the current branch **onto** main. So what's the extra `onto` argument about?
I looked it up, and `--onto` definitely solves a problem that Ive rarely/never actually had, but I guess Ill write down my understanding of it anyway.
```
A - B - C (main)
\
D - E - F - G (mybranch)
|
otherbranch
```
Imagine that for some reason I just want to move commits `F` and `G` to be rebased on top of `main`. I think theres probably some git workflow where this comes up a lot.
Apparently you can run `git rebase --onto main otherbranch mybranch` to do that. It seems impossible to me to remember the syntax for this (there are 3 different branch names involved, which for me is too many), but I heard about it from a bunch of people so I guess it must be useful.
### commit
Someone mentioned that they found it confusing that commit is used both as a verb and a noun in git.
for example:
* verb: “Remember to commit often”
* noun: “the most recent commit on `main`
My guess is that most folks get used to this relatively quickly, but this use of “commit” is different from how its used in SQL databases, where I think “commit” is just a verb (you “COMMIT” to end a transaction) and not a noun.
Also in git you can think of a Git commit in 3 different ways:
1. a **snapshot** of the current state of every file
2. a **diff** from the parent commit
3. a **history** of every previous commit
None of those are wrong: different commands use commits in all of these ways. For example `git show` treats a commit as a diff, `git log` treats it as a history, and `git restore` treats it as a snapshot.
But gits terminology doesnt do much to help you understand in which sense a commit is being used by a given command.
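For example (a sketch):
```
# a commit as a diff
git show HEAD
# a commit as part of a history
git log --oneline
# a commit as a snapshot: pull one file out exactly as it was back then
git restore --source=HEAD^ file.txt
```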
### more confusing terms
Here are a bunch more confusing terms. I dont know what a lot of these mean.
things I dont really understand myself:
* “the git pickaxe” (maybe this is `git log -S` and `git log -G`, for searching the diffs of previous commits?)
* submodules (all I know is that they dont work the way I want them to work)
* “cone mode” in git sparse checkout (no idea what this is but someone mentioned it)
things that people mentioned finding confusing but that I left out of this post because it was already 3000 words:
* blob, tree
* the direction of “merge”
* “origin”, “upstream”, “downstream”
* that `push` and `pull` arent opposites
* the relationship between `fetch` and `pull` (pull = fetch + merge)
* git porcelain
* subtrees
* worktrees
* the stash
* “master” or “main” (it sounds like it has a special meaning inside git but it doesnt)
* when you need to use `origin main` (like `git push origin main`) vs `origin/main`
github terms people mentioned being confused by:
* “pull request” (vs “merge request” in gitlab which folks seemed to think was clearer)
* what “squash and merge” and “rebase and merge” do (Id never actually heard of `git merge --squash` until yesterday, I thought “squash and merge” was a special github feature)
### its genuinely “every git term”
I was surprised that basically every other core feature of git was mentioned by at least one person as being confusing in some way. Id be interested in hearing more examples of confusing git terms that I missed too.
Theres another great post about this from 2012 called [the most confusing git terminology][27]. It talks more about how gits terminology relates to CVS and Subversions terminology.
If I had to pick the 3 most confusing git terms, I think right now Id pick:
* a `head` is a branch, `HEAD` is the current branch
* “remote tracking branch” and “branch that tracks a remote” being different things
* how “index”, “staged”, “cached” all refer to the same thing
### thats all!
I learned a lot from writing this I learned a few new facts about git, but more importantly I feel like I have a slightly better sense now for what someone might mean when they say that everything in git is confusing.
I really hadnt thought about a lot of these issues before like Id never realized how “tracking” is used in such a weird way when discussing branches.
Also as usual I might have made some mistakes, especially since I ended up in a bunch of corners of git that I hadnt visited before.
--------------------------------------------------------------------------------
via: https://jvns.ca/blog/2023/11/01/confusing-git-terminology/
作者:[Julia Evans][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://jvns.ca/
[b]: https://github.com/lujun9972
[1]: https://social.jvns.ca/@b0rk/111330564535454510
[2]: tmp.MK9dzkPKFA#head-and-heads
[3]: tmp.MK9dzkPKFA#detached-head-state
[4]: tmp.MK9dzkPKFA#ours-and-theirs-while-merging-or-rebasing
[5]: tmp.MK9dzkPKFA#your-branch-is-up-to-date-with-origin-main
[6]: tmp.MK9dzkPKFA#head-head-head-head-head-2-head-2
[7]: tmp.MK9dzkPKFA#and
[8]: tmp.MK9dzkPKFA#can-be-fast-forwarded
[9]: tmp.MK9dzkPKFA#reference-symbolic-reference
[10]: tmp.MK9dzkPKFA#refspecs
[11]: tmp.MK9dzkPKFA#tree-ish
[12]: tmp.MK9dzkPKFA#index-staged-cached
[13]: tmp.MK9dzkPKFA#reset-revert-restore
[14]: tmp.MK9dzkPKFA#untracked-files-remote-tracking-branch-track-remote-branch
[15]: tmp.MK9dzkPKFA#checkout
[16]: tmp.MK9dzkPKFA#reflog
[17]: tmp.MK9dzkPKFA#merge-vs-rebase-vs-cherry-pick
[18]: tmp.MK9dzkPKFA#rebase-onto
[19]: tmp.MK9dzkPKFA#commit
[20]: tmp.MK9dzkPKFA#more-confusing-terms
[21]: https://git-scm.com/docs/gitglossary
[22]: https://nitaym.github.io/ourstheirs/
[23]: https://git-scm.com/docs/git-range-diff
[24]: https://matthew-brett.github.io/pydagogue/pain_in_dots.html
[25]: https://git-scm.com/docs/revisions
[26]: https://gitster.livejournal.com/39629.html
[27]: https://longair.net/blog/2012/05/07/the-most-confusing-git-terminology/

View File

@ -0,0 +1,145 @@
[#]: subject: "Fedora Linux Flatpak cool apps to try for November"
[#]: via: "https://fedoramagazine.org/fedora-linux-flatpak-cool-apps-to-try-for-november/"
[#]: author: "Eduard Lucena https://fedoramagazine.org/author/x3mboy/"
[#]: collector: "lujun9972/lctt-scripts-1693450080"
[#]: translator: "geekpi"
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "
Fedora Linux Flatpak cool apps to try for November
======
![][1]
Image by Daimar Stein
This article introduces projects available in Flathub with installation instructions.
[Flathub][2] is the place to get and distribute apps for all of Linux. It is powered by Flatpak, allowing Flathub apps to run on almost any Linux distribution.
Please read “[Getting started with Flatpak][3]”. In order to enable flathub as your flatpak provider, use the instructions on the [flatpak site][4].
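At the time of writing, enabling Flathub essentially amounts to adding its remote (check the linked page for the current command, since it may change):
```
flatpak remote-add --if-not-exists flathub https://dl.flathub.org/repo/flathub.flatpakrepo
```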
### TurboWarp
[TurboWarp][5] is a modified version of Scratch. Scratch is a coding language with a simple visual interface that allows young people to create digital stories, games, and animations.
I love Scratch, but since I discovered TurboWarp, my son has never looked back. The interface is clearer, it has night mode, it works faster than the original Scratch, and its memory optimized.
You can install “TurboWarp” by clicking the install button on the web site or manually using this command:
```
flatpak install flathub org.turbowarp.TurboWarp
```
### Szyszka
[Szyszka][6] is a file renamer with a lot of interesting features like:
* Great performance
* Available for Linux, Mac and Windows
* GUI created with GTK 4
* Multiple rules which can be freely combined:
* Replace text
* Trim text
* Add text
* Add numbers
* Purge text
* Change letters to upper/lowercase
* Custom rules
* Saved rules to be used later
* Ability to edit, reorder rules and results
* Handles hundreds of thousands of records
You can install “Szyszka” by clicking the install button on the web site or manually using this command:
```
flatpak install flathub com.github.qarmin.szyszka
```
### Marker
[Marker][7] is a Markdown editor written in GTK3. It's one of my favorites for fast writing on GTK. Some of its features are:
* HTML and LaTeX conversion of markdown documents with [scidown][8]
* Support for YAML headers
* Document classes
* Beamer/presentation mode
* Abstract sections
* Table of Contents
* External document inclusion
* Equations, figures, table and listings with reference id and caption
* Internal references
* TeX math rendering with [KaTeX][9] or [MathJax][10]
* Syntax highlighting for code blocks with [highlight.js][11]
* Flexible export options with [pandoc][12]
* PDF
* RTF
* ODT
* DOCX
You can install “Marker” by clicking the install button on the web site or manually using this command:
```
flatpak install flathub com.github.fabiocolacio.marker
```
_**Marker is also available as rpm on fedoras repositories**_
### Librum
[Librum][13] is an application to manage your library and read your e-books. Its a great way to manage a collection of books and documents, including support for a long list of formats. Some of its features are:
* A modern e-reader
* A personalized and customizable library
* Book meta-data editing
* A free in-app bookstore with more than 70,000 books
* Book syncing across all of your devices
* Highlighting, bookmarking, and text search
You can install “Librum” by clicking the install button on the web site or manually using this command:
```
flatpak install flathub com.librumreader.librum
```
--------------------------------------------------------------------------------
via: https://fedoramagazine.org/fedora-linux-flatpak-cool-apps-to-try-for-november/
作者:[Eduard Lucena][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://fedoramagazine.org/author/x3mboy/
[b]: https://github.com/lujun9972
[1]: https://fedoramagazine.org/wp-content/uploads/2023/10/flatpak-apps-for-november-banner-816x345.jpg
[2]: https://flathub.org
[3]: https://fedoramagazine.org/getting-started-flatpak/
[4]: https://flatpak.org/setup/Fedora
[5]: https://flathub.org/apps/org.turbowarp.TurboWarp
[6]: https://flathub.org/apps/com.github.qarmin.szyszka
[7]: https://flathub.org/apps/com.github.fabiocolacio.marker
[8]: https://github.com/wallberg13/scidown
[9]: https://katex.org/
[10]: https://www.mathjax.org/
[11]: https://highlightjs.org/
[12]: https://pandoc.org/
[13]: https://flathub.org/apps/com.librumreader.librum

View File

@ -0,0 +1,150 @@
[#]: subject: "8 Websites Linux Users Should Have bookmarked"
[#]: via: "https://itsfoss.com/useful-linux-websites/"
[#]: author: "Ankush Das https://itsfoss.com/author/ankush/"
[#]: collector: "lujun9972/lctt-scripts-1693450080"
[#]: translator: " "
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "
8 Websites Linux Users Should Have bookmarked
======
Considering you already follow us, we have your back for the most essential Linux requirements.
However, when it comes to Linux, there is always something to learn, even for all the Linux experts out there. 👨‍💻👩‍💻
So, there are some websites and blogs that are helpful for both newbies and experienced Linux users.
Let me list some of the best options for you to bookmark.
### 1\. ArchWiki
![][1]
[ArchWiki][2] is a platform for all kinds of information. Whether it is about a tool, a security technology, an installer, a desktop environment, or anything else, you can find insights about it on ArchWiki.
Technically, it serves as the documentation portal for the Arch Linux distribution. However, you can find tutorials, guides, FAQs, and other essential information regarding numerous things to help you even if you do not use Arch Linux.
The information is well-presented, thoroughly reviewed/updated, and easy to read.
### 2\. Explainshell
![][3]
[Explainshell][4] is an interesting portal that helps you identify the arguments used in a command in one go.
Usually, you search for manpages or information on commands separately. Explainshell speeds up the process, giving you the information you need along with a link to the relevant manpage.
Primarily, it displays information sourced from Ubuntu's manpage repository. So, whether you are installing software, working on a Git commit, or connecting to SSH, you can break down all the commands using Explainshell.
### 3\. Crontab.guru
![][5]
If you create [cron jobs][6] and schedules to automate things, [Crontab.guru][7] is a handy website.
You can just enter the expression that you intend to use in your cron job and see whether it will work as you expect. For correct expressions, it will spell out the schedule the cron job would follow.
As a bonus, even if you are new to cron jobs, it highlights which field in the expression is the month/day/week field as you edit.
If it looks correct to you, go ahead and use it; if not, edit the expression until it does.
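For example, a schedule expression you might paste in (my own sample, not one from the site):
```
# crontab.guru reads this as "at 09:30, Monday through Friday"
30 9 * * 1-5
```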
### 4\. Distrowatch
![][9]
[Distrowatch][10] is one portal every Linux user may already know. It is popular for listing trending Linux distributions. Some even consider the popularity chart to see if their favorite distribution ranks higher than others.
You can get updates on the latest distribution releases, including ones that you may never have heard of before, along with a summarized changelog for each new release.
If you subscribe to its newsletter, they also publish distro reviews and cover some development news. For users looking to keep up on the latest distributions, this is your bookmark.
### 5\. Phoronix
![][11]
[Phoronix][12] is one of the oldest Linux websites out there with the best hardware-focused content.
Whether you are looking for a benchmark on Linux with the latest processor or a distribution's performance, Phoronix has it. You also get a regular dose of news and development updates in the Linux world as an extra.
### 6\. Ubuntu Blog
![][13]
Canonical's blog is all about Ubuntu, its developments, enterprise updates, and other technological advances.
If you want to keep up with everything around Ubuntu, the [Ubuntu Blog][14] is the best place to have bookmarked. Whether you are an IoT enthusiast, or a robotics engineer making use of Ubuntu, there's always something happening.
And, to be honest, you can never get all these updates from any particular blog considering Ubuntu is everywhere.
### 7\. GamingOnLinux
![][15]
While we do cover some gaming updates and have a [gaming guide][16] for you, [GamingOnLinux][17] is the ultimate portal for everything on Linux gaming and Steam Deck.
Whether it is about a development change, a new game, SteamOS releases, SteamVR, or a sale that could matter to Linux users, you can find all about it.
### 8\. /r/Linux on Reddit
![][18]
Even though Reddit is no longer the place it used to be, the Subreddits are still worth a follow.
The [Linux subreddit][19] is a community to bookmark for the latest happenings in the open-source and Linux universe. You may not find the fellow Redditors as friendly as one would expect, but as long as you just want to keep an eye on updates, it deserves a bookmark.
### What Do I Keep Bookmarked?
I love **Phoronix's hardware insights** and **Distrowatch's** updates on newer distro projects. So, those two websites are always in my bookmark list.
What should you have bookmarked?
Well, if you are a desktop user using Ubuntu, **Canonical's blog** should be great to keep up with newer releases and explanations on newer features. For a gamer, **GamingOnLinux** is a one-stop portal.
In case you are always curious and want to know how things work/what it is - **ArchWiki** should be your go-to reference.
Of course, for all things Linux, we try our best not to disappoint you. So, do not forget to bookmark us as well!😉
_💬 What are your favorite websites to bookmark? Let me know in the comments down below!_
--------------------------------------------------------------------------------
via: https://itsfoss.com/useful-linux-websites/
作者:[Ankush Das][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://itsfoss.com/author/ankush/
[b]: https://github.com/lujun9972
[1]: https://itsfoss.com/content/images/2023/10/archwiki-archlinux.jpg
[2]: https://wiki.archlinux.org/
[3]: https://itsfoss.com/content/images/2023/10/explainshell.jpg
[4]: https://explainshell.com/
[5]: https://itsfoss.com/content/images/2023/10/crontab-guru.jpg
[6]: https://itsfoss.com/cron-job/
[7]: https://crontab.guru/
[8]: https://itsfoss.com/content/images/size/w256h256/2022/12/android-chrome-192x192.png
[9]: https://itsfoss.com/content/images/2023/10/distrowatch-screenshot.jpg
[10]: https://distrowatch.com/
[11]: https://itsfoss.com/content/images/2023/10/phoronix.jpg
[12]: https://www.phoronix.com/
[13]: https://itsfoss.com/content/images/2023/10/ubuntu-blog.jpg
[14]: https://ubuntu.com/blog
[15]: https://itsfoss.com/content/images/2023/10/gamingonlinux.jpg
[16]: https://itsfoss.com/linux-gaming-guide/
[17]: https://gamingonlinux.com/
[18]: https://itsfoss.com/content/images/2023/10/linux-subreddit.jpg
[19]: https://www.reddit.com/r/linux/

View File

@ -0,0 +1,320 @@
[#]: subject: "git rebase: what can go wrong?"
[#]: via: "https://jvns.ca/blog/2023/11/06/rebasing-what-can-go-wrong-/"
[#]: author: "Julia Evans https://jvns.ca/"
[#]: collector: "lujun9972/lctt-scripts-1693450080"
[#]: translator: " "
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "
git rebase: what can go wrong?
======
Hello! While talking with folks about Git, Ive been seeing a comment over and over to the effect of “I hate rebase”. People seemed to feel pretty strongly about this, and I was really surprised because I dont run into a lot of problems with rebase and I use it all the time.
Ive found that if many people have a very strong opinion thats different from mine, usually its because they have different experiences around that thing from me.
So I asked on [Mastodon][1]:
> today Im thinking about the tradeoffs of using `git rebase` a bit. I think the goal of rebase is to have a nice linear commit history, which is something I like.
>
> but what are the _costs_ of using rebase? what problems has it caused for you in practice? Im really only interested in specific bad experiences youve had here not opinions or general statements like “rewriting history is bad”
I got a huge number of incredible answers to this, and Im going to do my best to summarize them here. Ill also mention solutions or workarounds to those problems in cases where I know of a solution. Heres the list:
* [fixing the same conflict repeatedly is annoying][2]
* [rebasing a lot of commits is hard][3]
* [undoing a rebase is hard][4]
* [force pushing to shared branches can cause lost work][5]
* [force pushing makes code reviews harder][6]
* [losing commit metadata][7]
* [rebasing can break intermediate commits][8]
* [accidentally run git commit amend instead of git rebase continue][9]
* [splitting commits in an interactive rebase is hard][10]
* [complex rebases are hard][11]
* [rebasing long lived branches can be annoying][12]
* [rebase and commit discipline][13]
* [a “squash and merge” workflow][14]
* [miscellaneous problems][15]
My goal with this isnt to convince anyone that rebase is bad and you shouldnt use it (Im certainly going to keep using rebase!). But seeing all these problems made me want to be more cautious about recommending rebase to newcomers without explaining how to use it safely. It also makes me wonder if theres an easier workflow for cleaning up your commit history thats harder to accidentally mess up.
### my git workflow assumptions
First, I know that people use a lot of different Git workflows. Im going to be talking about the workflow Im used to when working on a team, which is:
* the team uses a central Github/Gitlab repo to coordinate
* theres one central `main` branch. Its protected from force pushes.
* people write code in feature branches and make pull requests to `main`
* The web service is deployed from `main` every time a pull request is merged.
* the only way to make a change to `main` is by making a pull request on Github/Gitlab and merging it
This is not the only “correct” git workflow (its a very “we run a web service” workflow and open source project or desktop software with releases generally use a slightly different workflow). But its what I know so thats what Ill talk about.
### two kinds of rebase
Also before we start: one big thing I noticed is that there were 2 different kinds of rebase that kept coming up, and only one of them requires you to deal with merge conflicts.
1. **rebasing on an ancestor** , like `git rebase -i HEAD^^^^^^^` to squash many small commits into one. As long as youre just squashing commits, youll never have to resolve a merge conflict while doing this.
2. **rebasing onto a branch that has diverged** , like `git rebase main`. This can cause merge conflicts.
I think its useful to make this distinction because sometimes Im thinking about rebase type 1 (which is a lot less likely to cause problems), but people who are struggling with it are thinking about rebase type 2.
Now lets move on to all the problems!
### fixing the same conflict repeatedly is annoying
If you make many tiny commits, sometimes you end up in a hellish loop where you have to fix the same merge conflict 10 times. You can also end up fixing merge conflicts totally unnecessarily (like dealing with a merge conflict in code that a future commit deletes).
There are a few ways to make this better:
* first do a `git rebase -i HEAD^^^^^^^^^^^` to squash all of the tiny commits into 1 big commit and then a `git rebase main` to rebase onto a different branch. Then you only have to fix the conflicts once.
* use `git rerere` to automate repeatedly resolving the same merge conflicts (“rerere” stands for “reuse recorded resolution”, itll record your previous merge conflict resolutions and replay them). Ive never tried this but I think you need to set `git config rerere.enabled true` and then itll automatically help you.
Also if I find myself resolving merge conflicts more than once in a rebase, Ill usually run `git rebase --abort` to stop it and then squash my commits into one and try again.
### rebasing a lot of commits is hard
Generally when Im doing a rebase onto a different branch, Im rebasing 1-2 commits. Maybe sometimes 5! Usually there are no conflicts and it works fine.
Some people described rebasing hundreds of commits by many different people onto a different branch. That sounds really difficult and I dont envy that task.
### undoing a rebase is hard
I heard from several people that when they were new to rebase, they messed up a rebase and permanently lost a week of work that they then had to redo.
The problem here is that undoing a rebase that went wrong is **much** more complicated than undoing a merge that went wrong (you can undo a bad merge with something like `git reset --hard HEAD^`). Many newcomers to rebase dont even realize that undoing a rebase is even possible, and I think its pretty easy to understand why.
That said, it is possible to undo a rebase that went wrong. Heres an example of how to undo a rebase using `git reflog`.
**step 1** : Do a bad rebase (for example run `git rebase -i HEAD^^^^^` and just delete 3 commits)
**step 2** : Run `git reflog`. You should see something like this:
```
ee244c4 (HEAD -> main) HEAD@{0}: rebase (finish): returning to refs/heads/main
ee244c4 (HEAD -> main) HEAD@{1}: rebase (pick): test
fdb8d73 HEAD@{2}: rebase (start): checkout HEAD^^^^^^^
ca7fe25 HEAD@{3}: commit: 16 bits by default
073bc72 HEAD@{4}: commit: only show tooltips on desktop
```
**step 3** : Find the entry immediately before `rebase (start)`. In my case thats `ca7fe25`
**step 4** : Run `git reset --hard ca7fe25`
Another solution folks mentioned to “undoing a rebase is hard” that avoids having to use the reflog is to make a “backup branch” with `git switch -c backup` before rebasing, so you can easily get back to the old commit.
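A minimal sketch of that backup-branch approach (branch names are illustrative):
```
git switch -c backup      # snapshot the current state as a branch
git switch mybranch       # go back and do the risky rebase
git rebase -i HEAD~5
# ...and if it goes wrong after finishing:
git reset --hard backup   # jump the branch back to the snapshot
```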
### force pushing to shared branches can cause lost work
A few people mentioned the following situation:
1. Youre collaborating on a branch with someone
2. You push some changes
3. They rebase the branch and run `git push --force` (maybe by accident)
4. Now when you run `git pull`, it's a mess: you get a `fatal: Need to specify how to reconcile divergent branches` error
5. While trying to deal with the fallout you might lose some commits, especially if some of the people involved aren't very comfortable with git
This is an even worse situation than the “undoing a rebase is hard” situation because the missing commits might be split across many different people's machines, and the only thing worse than having to hunt through the reflog is multiple different people having to hunt through the reflog.
This has never happened to me because the only branch Ive ever collaborated on is `main`, and `main` has always been protected from force pushing (in my experience the only way you can get something into `main` is through a pull request). So Ive never even really been in a situation where this _could_ happen. But I can definitely see how this would cause problems.
The main tools I know to avoid this are:
* dont rebase on shared branches
* use `--force-with-lease` when force pushing, to make sure that nobody else has pushed to the branch since your last push
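For example (a sketch; the branch name is illustrative):
```
# the push is refused if the remote branch has moved since you last fetched
git push --force-with-lease origin mybranch
```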
I was curious about why people would run `git push --force` on a shared branch. Some reasons people gave were:
* theyre working on a collaborative feature branch, and the feature branch needs to be rebased onto `main`. The idea here is that youre just really careful about coordinating the rebase so nothing gets lost.
* as an open source maintainer, sometimes they need to rebase a contributors branch to fix a merge conflict
* theyre new to git, read some instructions online that suggested `git rebase` and `git push --force` as a solution, and followed them without understanding the consequences
* theyre used to doing `git push --force` on a personal branch and ran it on a shared branch by accident
### force pushing makes code reviews harder
The situation here is:
* You make a pull request on GitHub
* People leave some comments
* You update the code to address the comments, rebase to clean up your commits, and force push
* Now when the reviewer comes back, it's hard for them to tell what you changed since the last time they saw it, because all the commits show up as “new”.
One way to avoid this is to push new commits addressing the review comments, and then after the PR is approved do a rebase to reorganize everything.
I think some reviewers are more annoyed by this problem than others, its kind of a personal preference. Also this might be a Github-specific issue, other code review tools might have better tools for managing this.
### losing commit metadata
If youre rebasing to squash commits, you can lose important commit metadata like `Co-Authored-By`. Also if you GPG sign your commits, rebase loses the signatures.
Theres probably other commit metadata that you can lose that Im not thinking of.
I havent run into this one so Im not sure how to avoid it. I think GPG signing commits isnt as popular as it used to be.
### rebasing can break intermediate commits
If youre trying to have a very clean commit history where the tests pass on every commit (very admirable!), rebasing can result in some intermediate commits that are broken and dont pass the tests, even if the final commit passes the tests.
Apparently you can avoid this by using `git rebase -x` to run the test suite at every step of the rebase and make sure that the tests are still passing. Ive never done that though.
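Presumably it looks something like this (untested by me; `make test` stands in for whatever your test command is):
```
# run the test suite after applying each commit; the rebase stops if it fails
git rebase -i -x "make test" main
```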
### accidentally run `git commit --amend` instead of `git rebase --continue`
A couple of people mentioned issues with running `git commit --amend` instead of `git rebase --continue` when resolving a merge conflict.
The reason this is confusing is that there are two reasons when you might want to edit files during a rebase:
1. editing a commit (by using `edit` in `git rebase -i`), where you need to write `git commit --amend` when youre done
2. a merge conflict, where you need to run `git rebase --continue` when youre done
Its very easy to get these two cases mixed up because they feel very similar. I think what goes wrong here is that you:
* Start a rebase
* Run into a merge conflict
* Resolve the merge conflict, and run `git add file.txt`
* Run `git commit` because thats what youre used to doing after you run `git add`
* But you were supposed to run `git rebase --continue`! Now you have a weird extra commit, and maybe it has the wrong commit message and/or author
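To keep the two cases straight, here's a sketch of what you're supposed to run in each one:
```
# case 1: you hit a merge conflict during a rebase
git add file.txt
git rebase --continue     # NOT git commit

# case 2: you picked `edit` for a commit in git rebase -i
git commit --amend
git rebase --continue
```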
### splitting commits in an interactive rebase is hard
The whole point of rebase is to clean up your commit history, and **combining** commits with rebase is pretty easy. But what if you want to split up a commit into 2 smaller commits? Its not as easy, especially if the commit you want to split is a few commits back! I actually dont really know how to do it even though I feel very comfortable with rebase. Id probably just do `git reset HEAD^^^` or something and use `git add -p` to redo all my commits from scratch.
One person shared [their workflow for splitting commits with rebase][16].
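Here's a sketch of the “redo from scratch” approach described above (not the linked workflow):
```
# undo the last 3 commits, but keep all their changes in the working tree
git reset HEAD^^^
# re-stage the changes interactively, in whatever chunks you want
git add -p
git commit
# repeat git add -p / git commit until everything is committed again
```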
### complex rebases are hard
If you try to do too many things in a single `git rebase -i` (reorder commits AND combine commits AND modify a commit), it can get really confusing.
To avoid this, I personally prefer to only do 1 thing per rebase, and if I want to do 2 different things Ill do 2 rebases.
### rebasing long lived branches can be annoying
If your branch is long-lived (like for 1 month), having to rebase repeatedly gets painful. It might be easier to just do 1 merge at the end and only resolve the conflicts once.
The dream is to avoid this problem by not having long-lived branches but it doesnt always work out that way in practice.
### miscellaneous problems
A few more issues that I think are not that common:
* **Stopping a rebase wrong** : If you try to abort a rebase thats going badly with `git reset --hard` instead of `git rebase --abort`, things will behave weirdly until you stop it properly
* **Weird interactions with merge commits** : A couple of quotes about this: “If you rebase your working copy to keep a clean history for a branch, but the underlying project uses merges, the result can be ugly. If you do rebase -i HEAD~4 and the fourth commit back is a merge, you can see dozens of commits in the interactive editor.“, “Ive learned the hard way to _never_ rebase if Ive merged anything from another branch”
### rebase and commit discipline
Ive seen a lot of people arguing about rebase. Ive been thinking about why this is and Ive noticed that people work at a few different levels of “commit discipline”:
1. Literally anything goes, “wip”, “fix”, “idk”, “add thing”
2. When you make a pull request (on github/gitlab), squash all of your crappy commits into a single commit with a reasonable message (usually the PR title)
3. Atomic Beautiful Commits every change is split into the appropriate number of commits, where each one has a nice commit message and where they all tell a story around the change youre making
Often I think different people inside the same company have different levels of commit discipline, and Ive seen people argue about this a lot. Personally Im mostly a Level 2 person. I think Level 3 might be what people mean when they say “clean commit history”.
I think Level 1 and Level 2 are pretty easy to achieve without rebase for level 1, you dont have to do anything, and for level 2, you can either press “squash and merge” in github or run `git switch main; git merge --squash mybranch` on the command line.
But for Level 3, you either need rebase or some other tool (like GitUp) to help you organize your commits to tell a nice story.
Ive been wondering if when people argue about whether people “should” use rebase or not, theyre really arguing about which minimum level of commit discipline should be required.
I think how this plays out also depends on how big the changes folks are making if folks are usually making pretty small pull requests anyway, squashing them into 1 commit isnt a big deal, but if youre making a 6000-line change you probably want to split it up into multiple commits.
### a “squash and merge” workflow
A couple of people mentioned using this workflow that doesnt use rebase:
* make commits
* Run `git merge main` to merge main into the branch periodically (and fix conflicts if necessary)
* When youre done, use GitHubs “squash and merge” feature (which is the equivalent of running `git checkout main; git merge --squash mybranch`) to squash all of the changes into 1 commit. This gets rid of all the “ugly” merge commits.
I originally thought this would make the log of commits on my branch too ugly, but apparently `git log main..mybranch` will just show you the changes on your branch, like this:
```
$ git log main..mybranch
756d4af (HEAD -> mybranch) Merge branch 'main' into mybranch
20106fd Merge branch 'main' into mybranch
d7da423 some commit on my branch
85a5d7d some other commit on my branch
```
Of course, the goal here isnt to **force** people who have made beautiful atomic commits to squash their commits its just to provide an easy option for folks to clean up a messy commit history (“add new feature; wip; wip; fix; fix; fix; fix; fix;“) without having to use rebase.
Id be curious to hear about other people who use a workflow like this and if it works well.
### there are more problems than I expected
I went into this really feeling like “rebase is fine, what could go wrong?” But many of these problems actually have happened to me in the past, its just that over the years Ive learned how to avoid or fix all of them.
And Ive never really seen anyone share best practices for rebase, other than “never force push to a shared branch”. All of these honestly make me a lot more reluctant to recommend using rebase.
To recap, I think these are my personal rebase rules I follow:
* stop a rebase if its going badly instead of letting it finish (with `git rebase --abort`)
* know how to use `git reflog` to undo a bad rebase
* dont rebase a million tiny commits (instead do it in 2 steps: `git rebase -i HEAD^^^^` and then `git rebase main`)
* dont do more than one thing in a `git rebase -i`. Keep it simple.
* never force push to a shared branch
* never rebase commits that have already been pushed to `main`
Thanks to Marco Rogers for encouraging me to think about the problems people have with rebase, and to everyone on Mastodon who helped with this.
--------------------------------------------------------------------------------
via: https://jvns.ca/blog/2023/11/06/rebasing-what-can-go-wrong-/
作者:[Julia Evans][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://jvns.ca/
[b]: https://github.com/lujun9972
[1]: https://social.jvns.ca/@b0rk/111342083852635579
[2]: tmp.kca9jYlNEk#fixing-the-same-conflict-repeatedly-is-annoying
[3]: tmp.kca9jYlNEk#rebasing-a-lot-of-commits-is-hard
[4]: tmp.kca9jYlNEk#undoing-a-rebase-is-hard
[5]: tmp.kca9jYlNEk#force-pushing-to-shared-branches-can-cause-lost-work
[6]: tmp.kca9jYlNEk#force-pushing-makes-code-reviews-harder
[7]: tmp.kca9jYlNEk#losing-commit-metadata
[8]: tmp.kca9jYlNEk#rebasing-can-break-intermediate-commits
[9]: tmp.kca9jYlNEk#accidentally-run-git-commit-amend-instead-of-git-rebase-continue
[10]: tmp.kca9jYlNEk#splitting-commits-in-an-interactive-rebase-is-hard
[11]: tmp.kca9jYlNEk#complex-rebases-are-hard
[12]: tmp.kca9jYlNEk#rebasing-long-lived-branches-can-be-annoying
[13]: tmp.kca9jYlNEk#rebase-and-commit-discipline
[14]: tmp.kca9jYlNEk#a-squash-and-merge-workflow
[15]: tmp.kca9jYlNEk#miscellaneous-problems
[16]: https://github.com/kimgr/git-rewrite-guide#split-a-commit

View File

@ -0,0 +1,225 @@
[#]: subject: "Rename Files and Directories in Linux Command Line"
[#]: via: "https://itsfoss.com/linux-rename-files-directories/"
[#]: author: "Abhishek Prakash https://itsfoss.com/author/abhishek/"
[#]: collector: "lujun9972/lctt-scripts-1693450080"
[#]: translator: " "
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "
Rename Files and Directories in Linux Command Line
======
How do you rename files and directories in the Linux terminal? You use the mv command.
Yes, the same mv command which is used for 'moving' files and folders from one location to another.
You can simply specify the new name for the files and directories while 'moving them'.
To rename a file, use:
```
mv old_file new_file
```
Similarly, to rename a directory, use:
```
mv old_dir new_dir
```
Sounds easy, right? But I'll discuss renaming of files in detail here:
* Show you practical examples of renaming
* Show example of bulk renaming multiple files by combining the find and exec command
* Discuss a dedicated rename utility for batch renaming files
Let's see it one by one.
### Renaming files and directories with mv command
Use the mv command to rename a file in the same directory:
```
mv file1.txt file2.txt
```
Similarly, you can rename a directory in the same location:
```
mv dir1 dir2
```
Here's an example where I rename a file and a directory:
![][1]
As you can see, unlike the [cp command][2], you don't have to use the recursive option for handling directories with [mv command][3].
🚧
If you try renaming a file to the same name, you'll see an error (obviously).
You may also rename a file while moving it to another location:
```
mv old-file-name another_dir/new-file-name
```
In the example below, I moved the file named `firefox-quiz.txt` to the sample directory. And while doing that, I renamed it `quiz.txt`.
![][4]
I think of it as the cut-paste operation.
💡
While you can move multiple files to another location (mv file1 file2 file3 dir), you CANNOT rename multiple files with mv. For that, you have to employ other tactics that I discuss in the following sections.
### Renaming multiple files matching a pattern by combining mv, find and exec commands
🚧
Be extra careful while batch renaming files like this. One wrong move and you'll end up with an undesired result that cannot be undone.
The find command is used for finding files in the given directory based on their name, type, modification time and other parameters. The [exec command is combined with find][5] to execute commands on the result of the find command.
There is no set, standard structure to use find, exec and mv commands. You can combine them as per your need.
Let's say you want to rename all the files ending with `.txt` in the current directory by adding `_old` in its name. So `file_1.txt` becomes `file_1.txt_old` etc.
```
find . -type f -name "*.txt" -exec mv {} {}_old \;
```
![][6]
This is just an example and your renaming requirements could be different. Also, **the above works with filenames without spaces only**.
**Pro Tip** : When dealing with bulk actions like this, you can smartly use the echo command to see what action will be performed instead of actually performing it. If it looks alright, then go with the actual action.
For example, first see what files will be renamed:
```
find . -type f -name "*.txt" -exec echo mv {} {}_old \;
```
![][7]
As you can see, no files were actually renamed. But you get to see which commands would be executed if you ran the above command without echo.
If it looks alright to you, remove the echo command and proceed with actual renaming.
```
find . -type f -name "*.txt" -exec mv {} {}_old \;
```
I learned this trick in the Efficient Linux at the Command Line book. An excellent book filled with small gems like this. No wonder it has become one of [my favorite Linux books][8].
![][9]
##### New Book: Efficient Linux at the Command Line
Pretty amazing Linux book with lots of practical tips. It fills in the gap, even for experienced Linux users. Must have in your collection.
[Get it from Amazon][10]
### Renaming multiple files easily with the rename command
There is a handy command line utility called rename which can be used for batch renaming files based on a given Perl regex pattern.
This utility is not part of the GNU toolchain and it does not come preinstalled. So you have to use your distribution's package manager to install it first.
For Debian/Ubuntu, the command would be:
```
sudo apt install rename
```
You can use it in the following manner:
```
rename [options] perl_regex [files]
```
The options are:
* -v : Verbose mode
* -n : No action, show the files that would be renamed but dont rename them
* -o : No overwrite
* -f : Force overwrite existing files
* -s : Don't rename the soft link but its target
Now, let's take the same example that you saw in the previous section: renaming the `.txt` files to `.txt_old`.
```
rename 's/\.txt$/.txt_old/' **
```
I am not going to explain the regex here. The `**` means to look into all files in all subdirectories.
![][11]
And as you can see, it works as expected.
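If you want to preview the outcome first, the `-n` option from the list above does a dry run (a quick sketch):
```
# show what would be renamed, without actually renaming anything
rename -n 's/\.txt$/.txt_old/' *.txt
```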
### Conclusion
I hope you liked this tip that helps you learn to do basic tasks in the Linux command line. Of course, it is for those who want to learn and use the command line. Desktop users always have the GUI tools for such tasks.
If you are absolutely new to Linux commands, this series will help you a great deal.
Let me know if you have questions or suggestions.
--------------------------------------------------------------------------------
via: https://itsfoss.com/linux-rename-files-directories/
作者:[Abhishek Prakash][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://itsfoss.com/author/abhishek/
[b]: https://github.com/lujun9972
[1]: https://itsfoss.com/content/images/2023/11/renaming-file-directory-linux-command-line.png
[2]: https://itsfoss.com/cp-command/
[3]: https://linuxhandbook.com/mv-command/
[4]: https://itsfoss.com/content/images/2023/11/rename-file-while-moving-another-location.png
[5]: https://linuxhandbook.com/find-exec-command/
[6]: https://itsfoss.com/content/images/2023/11/bulk-renaming-files-linux-1.png
[7]: https://itsfoss.com/content/images/2023/11/use-echo-for-dry-run-renaming-files.png
[8]: https://itsfoss.com/best-linux-books/
[9]: https://itsfoss.com/content/images/2023/04/efficient-at-linux-command-line-horizontal.png
[10]: https://amzn.to/3MPjiHw
[11]: https://itsfoss.com/content/images/2023/11/use-rename-command-linux.png
[12]: https://itsfoss.com/content/images/size/w256h256/2022/12/android-chrome-192x192.png

View File

@ -0,0 +1,145 @@
[#]: subject: "How to rebase to Fedora Linux 39 on Silverblue"
[#]: via: "https://fedoramagazine.org/how-to-rebase-to-fedora-linux-39-on-silverblue/"
[#]: author: "Michal Konečný https://fedoramagazine.org/author/zlopez/"
[#]: collector: "lujun9972/lctt-scripts-1693450080"
[#]: translator: " "
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "
How to rebase to Fedora Linux 39 on Silverblue
======
![][1]
Fedora Silverblue is [an operating system for your desktop built on Fedora Linux][2]. Its excellent for daily use, development, and container-based workflows. It offers [numerous advantages][3] such as being able to roll back in case of any problems. If you want to update or rebase to Fedora Linux 39 on your Fedora Silverblue system, this article tells you how. It not only shows you what to do, but also how to revert things if something unforeseen happens.
### Update your existing system
Prior to actually doing the rebase to Fedora Linux 39, you should apply any pending updates. Enter the following in the terminal:
```
$ rpm-ostree update
```
or install updates through GNOME Software and reboot.
### Rebasing using GNOME Software
GNOME Software shows you that there is a new version of Fedora Linux available on the Updates screen.
First thing you need to do is download the new image, so click on the _Download_ button. This will take some time. When its done you will see that the update is ready to install.
Click on the _Restart & Upgrade_ button. This step will take only a few moments and the computer will be restarted when the update is completed. After the restart you will end up in new and shiny release of Fedora Linux 39. Easy, isnt it?
### Rebasing using terminal
If you prefer to do everything in a terminal, then this part of the guide is for you.
Rebasing to Fedora Linux 39 using the terminal is easy. First, check if the 39 branch is available:
```
$ ostree remote refs fedora
```
You should see the following in the output:
```
fedora:fedora/39/x86_64/silverblue
```
If you want to pin the current deployment (meaning that this deployment will stay as an option in GRUB until you remove it), you can do it by running:
```
# 0 is entry position in rpm-ostree status
$ sudo ostree admin pin 0
```
To remove the pinned deployment use the following command:
```
# 2 is entry position in rpm-ostree status
$ sudo ostree admin pin --unpin 2
```
Next, rebase your system to the Fedora Linux 39 branch.
```
$ rpm-ostree rebase fedora:fedora/39/x86_64/silverblue
```
Finally, the last thing to do is restart your computer and boot to Fedora Linux 39.
### How to roll back
If anything bad happens (for instance, if you cant boot to Fedora Linux 39 at all) its easy to go back. At boot time, pick the entry in the GRUB menu for the version prior to Fedora Linux 39 and your system will start in that previous version rather than Fedora Linux 39. If you dont see the GRUB menu, try to press ESC during boot. To make the change to the previous version permanent, use the following command:
```
$ rpm-ostree rollback
```
Thats it. Now you know how to rebase Fedora Silverblue to Fedora Linux 39 and roll back. So why not do it today?
### FAQ
Because similar questions appear in the comments for each article about rebasing to a newer version of Silverblue, I will try to answer them in this section.
**Question: Can I skip versions during rebase of Fedora? For example from Fedora 37 Silverblue to Fedora 39 Silverblue?**
Answer: Although it could be sometimes possible to skip versions during rebase, it is not recommended. You should always update to one version above (38->39 for example) to avoid unnecessary errors.
**Question: I have[rpm-fusion][4] layered and I got errors during rebase. How should I do the rebase?**
Answer: If you have [rpm-fusion][4] layered on your Silverblue installation, you should do the following before rebase (this is a single line command, beware the wrapping):
```
rpm-ostree update --uninstall rpmfusion-free-release --uninstall rpmfusion-nonfree-release --install rpmfusion-free-release --install rpmfusion-nonfree-release
```
After doing this you can follow the guide in this blog post.
**Question: Could this guide be used for another ostree editions (Kinoite, Sericea)?**
Answer: Yes, you can follow the _Rebasing using terminal_ part of this guide for every ostree edition of Fedora. Just use the corresponding branch. For example, for Kinoite use `fedora:fedora/39/x86_64/kinoite` instead of `fedora:fedora/39/x86_64/silverblue`.
--------------------------------------------------------------------------------
via: https://fedoramagazine.org/how-to-rebase-to-fedora-linux-39-on-silverblue/
作者:[Michal Konečný][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://fedoramagazine.org/author/zlopez/
[b]: https://github.com/lujun9972
[1]: https://fedoramagazine.org/wp-content/uploads/2021/04/silverblue-rebase-816x345.jpg
[2]: https://docs.fedoraproject.org/en-US/fedora-silverblue/
[3]: https://fedoramagazine.org/give-fedora-silverblue-a-test-drive/
[4]: https://rpmfusion.org/

View File

@ -0,0 +1,276 @@
[#]: subject: "Tracking Changes and Version Management with LibreOffice"
[#]: via: "https://itsfoss.com/libreoffice-version-control/"
[#]: author: "Sreenath https://itsfoss.com/author/sreenath/"
[#]: collector: "lujun9972/lctt-scripts-1693450080"
[#]: translator: " "
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "
Tracking Changes and Version Management with LibreOffice
======
LibreOffice, the free and open-source office suite, comes with a handy collaborative editing feature that records changes to a document.
It lets you view the changes, comment on them, approve or reject them, etc. You can find this feature handy if **multiple users (or a workgroup) utilize LibreOffice Writer** or Spreadsheets.
So, how can you use this feature? What do you have to do to track changes and versions of a document meant for collaboration?
Considering you already have the [latest LibreOffice installed][1], let me tell you about it here.
📋
For real-time collaboration, you need LibreOffice online (managed or your own hosted solution). We are not discussing that here.
### Enabling Change Recording
By default, the feature is not enabled. So, first, you need to enable it.
Open LibreOffice and go to **Edit → Track Changes → Record**.
![Toggle Record feature][2]
Now, you need to enable the Track changes toolbar, for convenience. For this, go to **View → Toolbars → Track Changes**.
![Enable Track Changes][3]
You can see a small toolbar appeared on the bottom.
![Track Changes Toolbar][4]
You can use these buttons to manage the changes. Options like accepting/rejecting changes, adding comments, and comparing versions can be found here.
### Add User Data
Before you start recording changes, you must add User Data to identify who made each change. Without this, any change made will be marked as something done by an “Unknown user”.
First, go to **Tools → Options**.
![Click on Options][5]
Here, inside the “ **User Data** ” section, add your **name** , **address** , **email** etc., if you prefer, but **name is a must**.
![Enter User Data][6]
That's it. You are good to go.
### Working with Recording Changes
Now, let us take a look at how all of this works:
#### Locating Changes
First, when you add a new word to the document, it appears in a yellow text.
![Changes are recorded in LibreOffice][8]
As you can see from the above screenshot, when a word is removed, it is not deleted from the document. Instead, it is marked with a strikeout annotation. When you add another word in its place, that also appears highlighted and underlined.
You can notice a small bar on the extreme left of the lines that include some kind of change, **even if it is a small comma addition**.
#### Knowing about the changes
Now that you have located the changes appearing in the document, what about the author who made them?
LibreOffice also notes the author (user) who has made the change to a particular document. You can get this detail through several places.
Hover over the marked text, to know about that particular change. It will show the author, changed date and time.
![Author of Changes][9]
Or, you can click on the **manage track changes button** on the bottom toolbar, to get an overview of all changes as shown in the screenshot below:
![Manage Changes window \(Click to enlarge the image\)][10]
#### Accepting/Rejecting Changes
To accept or reject a particular change, first, click on that particular change.
Next, click on the **Accept change button** (with a tick mark) to accept it. Or use the **Reject button** (with a cross mark) to reject the change.
![Accept and Reject Button \(Click to enlarge the image\)][11]
If you accept a change, the text is finalized and, in the case of an addition, changes to black. If the change was a deletion, the highlighted word is removed.
On the other hand, if you reject a change, the addition/deletion will be cancelled.
Similarly, you can accept/reject all the changes at once, using the **Accept All/ Reject All** buttons. It is placed adjacent to the individual accept and reject buttons on the toolbar.
![Accept or Reject All Changes][12]
Another way to accept and reject changes is through the **Manage Changes** dialog box. Click on the Manage Changes button on the bottom toolbar, as described in the previous section.
Now, you can select a particular change and then use the bottom buttons for Accepting/Rejecting.
![Manage Changes through Dialog Box][13]
#### Inserting Comments
You can either insert a comment at any position in the document, or add a comment to a particular change.
To add a comment at a particular place, click on that position (to bring the cursor there). Next, click on the **Add Comment button** on the bottom toolbar.
![Click on Add Comment Button \(Click to enlarge the image\)][14]
This will open a sidebar on the right, pointing to the place where the comment will be added. Type the message there as shown below:
![Enter Comment Text][15]
You can click on the adjacent rectangle button to get several actions for that comment. Use the top “**Comment**” button to hide/unhide the comments.
Similarly, you can click on a particular change, and then click on the “**Insert Track Change Comment**” button.
![Track Change Comment \(Click to enlarge the image\)][16]
It looks a bit different but serves the same purpose. In the dialog box that appears, enter the comment and click OK to save it.
![Insert Track Change Comment \(Click to enlarge the image\)][17]
You have now added a comment to that change.
So, when you head to the **Manage Changes** dialog box, you can see the comment right next to the corresponding change. Pretty handy, right? Of course, as one of the [best open-source alternatives to Microsoft Office][18], you should not expect any less 😉
![View Track Change Comment][19]
#### Save a Version
While there is an auto-save feature to protect your document from a crash, there is no auto-versioning of documents.
So, once you have made some changes, you can save a version of it.
Go to **File → Versions**.
![File → Versions][20]
Here, you can save the current version of the document.
![Click on Save New Version][21]
Insert a version comment to identify it easily, and then click OK.
![Insert Version Comment][22]
You can now view multiple versions of the document by navigating to the same menu option. Instead of saving a new version, you can select an existing version and hit “**Open**” to access it.
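In case you are wondering where these versions go: they are saved inside the ODT file itself (which is a ZIP archive), so keeping many versions makes the file grow. Here is a rough Ruby sketch to check that on your own document; `report.odt` is a placeholder name, and the `Version` entry naming is my assumption, so adjust it to what you actually see:

```ruby
# Rough sketch: list the package entries of an ODT that look version-related.
# Assumes the `unzip` CLI is installed and a document named "report.odt"
# with at least one saved version.
listing = `unzip -l report.odt`

listing.each_line do |line|
  # Saved versions show up as extra entries inside the package.
  puts line if line.include?("Version")
end
```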
#### Filter Changes
LibreOffice provides a way to filter the changes based on author, time range, etc. This should help you find certain changes fast when there are lots of them.
First, hit the **Manage Changes** button to get to the dialog box listing the changes. Here, go to the **Filter** tab.
![Filter based on Author][23]
Next, you can set the criteria.
I have set it to **view only the changes made by the author “It's FOSS”**. So, when we go to the “List” tab, only changes made by the author “It's FOSS” will be listed.
![Changes made by a particular Author][24]
#### Compare with Original Document
Once the collaborative editing is done, you can compare the changed document with the original one. This requires the original to have been saved separately.
So, click on the **Compare button** on the bottom toolbar.
![Click on Compare Button][25]
Now, select the original file from the file chooser.
![Select the Document to Compare][26]
This gives you a document with the differences from the original highlighted, along with the Manage Changes dialog box.
![Changes compared][27]
💡
This is useful when one of the authors has modified the document without recording the changes.
#### Merge with Original
Once you have completed the changes, save the document.
🚧
If you are planning to merge the collaboratively edited document into the original document, you should not accept the changes in the edited document.
Once you have made all the changes, save the document without accepting them.
Now, open the original document in LibreOffice and go to **Edit → Track Changes → Merge Document**.
![Select Merge Documents option][28]
From the file chooser, select the edited document, and click Open.
![Open Edited Document to Merge][29]
On the next screen, the changes to the original document will be listed, along with a “Manage Changes” dialog box. Click on “**Accept All**” and the changes will be merged into the original document.
![Accept All Changes to Merge][30]
### Wrapping Up
LibreOffice is a feature-packed office suite. It lets you do pretty much everything the proprietary alternatives do.
You can also explore more [LibreOffice tips][31] to use it more effectively.
However, many users never discover how to use a particular feature. With this article, I hope you can now easily track changes and versions of your documents for a good collaborative editing experience.
_💬 What is your favorite part of the LibreOffice Writer editing experience? Any other feature that you want to highlight? Share all about it in the comments below._
--------------------------------------------------------------------------------
via: https://itsfoss.com/libreoffice-version-control/
作者:[Sreenath][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://itsfoss.com/author/sreenath/
[b]: https://github.com/lujun9972
[1]: https://itsfoss.com/install-libreoffice-ubuntu/
[2]: https://itsfoss.com/content/images/2023/11/Click-on-record-to-start-recording-changes.png
[3]: https://itsfoss.com/content/images/2023/11/enable-track-changes-toolbar.png
[4]: https://itsfoss.com/content/images/2023/11/track-changes-toolbar.png
[5]: https://itsfoss.com/content/images/2023/11/Click-on-tools-and-then-select-options.png
[6]: https://itsfoss.com/content/images/2023/11/set-user-data.png
[7]: https://itsfoss.com/content/images/size/w256h256/2022/12/android-chrome-192x192.png
[8]: https://itsfoss.com/content/images/2023/11/tracking-changes-in-libreoffice-with-annotated-words.png
[9]: https://itsfoss.com/content/images/2023/11/changes-made-by-who.png
[10]: https://itsfoss.com/content/images/2023/11/manage-changes-1.png
[11]: https://itsfoss.com/content/images/2023/11/accept-and-reject-button.png
[12]: https://itsfoss.com/content/images/2023/11/accept-or-reject-all.png
[13]: https://itsfoss.com/content/images/2023/11/manage-changes-through-dialog-box.png
[14]: https://itsfoss.com/content/images/2023/11/Click-on-add-comment-button.png
[15]: https://itsfoss.com/content/images/2023/11/enter-comment-on-the-appropriate-place.png
[16]: https://itsfoss.com/content/images/2023/11/click-on-add-track-change-comment.png
[17]: https://itsfoss.com/content/images/2023/11/inserting-track-change-comment.png
[18]: https://itsfoss.com/best-free-open-source-alternatives-microsoft-office/
[19]: https://itsfoss.com/content/images/2023/11/view-track-change-comment.png
[20]: https://itsfoss.com/content/images/2023/11/go-to-file-versions.png
[21]: https://itsfoss.com/content/images/2023/11/click-on-save-new-version-button.png
[22]: https://itsfoss.com/content/images/2023/11/insert-a-version-comment-and-click-OK.png
[23]: https://itsfoss.com/content/images/2023/11/filter-based-on-author.png
[24]: https://itsfoss.com/content/images/2023/11/Changes-made-by-author-itsfoss.png
[25]: https://itsfoss.com/content/images/2023/11/Click-on-compare-button.png
[26]: https://itsfoss.com/content/images/2023/11/select-the-original-document-to-compare.png
[27]: https://itsfoss.com/content/images/2023/11/compared-changes-highlighted.png
[28]: https://itsfoss.com/content/images/2023/11/select-merge-documents-option.png
[29]: https://itsfoss.com/content/images/2023/11/open-edited-document-to-merge.png
[30]: https://itsfoss.com/content/images/2023/11/accept-all-changes-to-merge-1.png
[31]: https://itsfoss.com/libreoffice-tips/

View File

@ -1,568 +0,0 @@
[#]: subject: "Making a DNS query in Ruby from scratch"
[#]: via: "https://jvns.ca/blog/2022/11/06/making-a-dns-query-in-ruby-from-scratch/"
[#]: author: "Julia Evans https://jvns.ca/"
[#]: collector: "lujun9972"
[#]: translator: "Drwhooooo"
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "
从零开始,运用 Ruby 语言创建一个 DNS 查询
======
大家好!
前段时间我写了一篇关于“[如何用 Go 语言建立一个简易的 DNS 解析器][1]”的帖子。
那篇帖子里我没写有关“如何生成以及解析 DNS 查询请求”的内容,因为我觉得这很无聊,不过一些伙计指出他们不知道如何解析和生成 DNS 查询请求,并且对此很感兴趣。
我开始好奇了 —— 解析 DNS 究竟_能_有多费劲事实证明并不难一段 120 行的精巧 Ruby 程序就能搞定。
所以,下面是一份“如何生成 DNS 查询请求、如何解析 DNS 响应报文”的速成教学。我们会用 Ruby 来完成,主要是因为不久之后我要在一场 Ruby 大会上演讲,这篇博客的部分内容就是为那场演讲做的准备。:)
(我会尽量让不了解 Ruby 的人也能看懂这些代码,所以这里只用到了非常基础的 Ruby 特性。)
最后,我们就能制作一个非常简易的 Ruby 版本的`dig`工具,能够查找域名,就像这样:
```
$ ruby dig.rb example.com
example.com 20314 A 93.184.216.34
```
整个大概 120 行左右,并没有 _那么_ 多。 (最终程序在这里,[dig.rb][2],如果你想略过细则,单纯想去读代码的话。)
我们不会去实现之前帖中所说的“一个 DNS 解析器是如何运作的?”,因为我们已经做过了。
那么我们开始吧!
顺便提一嘴,我会顺带解释:如果你想完全靠自己从头弄清楚 DNS 查询是如何布局的,应该怎么做。答案大多是“用 Wireshark 抓包看看”以及“阅读 RFC 1035即 DNS 的 RFC 规范)”。
### 步骤一:打开一个 UDP 套接字
要真正发出我们的 DNS 查询,我们需要先打开一个 UDP 套接字。我们会把 DNS 查询发送至 `8.8.8.8`,即 Google 的公共 DNS 服务器。
下面的代码会与 `8.8.8.8` 的 53 端口DNS 使用的端口)建立 UDP 连接。
```
require 'socket'
sock = UDPSocket.new
sock.bind('0.0.0.0', 12345)
sock.connect('8.8.8.8', 53)
```
##### 关于 UDP 的一个小贴士:
这里我不打算详细解释 UDP只说一点计算机网络的基本单位是“数据包”一个数据包就是一串字节而在这个程序里我们要做的是计算机网络中最简单的事情发送一个数据包然后接收一个数据包作为响应。
而 UDP 是发送数据包最简单的方式。
这也是发送 DNS 查询最常见的方式,不过你也可以用 TCP 或者 DNS-over-HTTPS。
##### 步骤二:自 Wireshark 复制一个 DNS 查询
那么:假设我们还不理解 DNS 是如何运作的,但又想尽快发出一个有效的 DNS 查询,最简单的办法就是把一个已经能用的查询复制过来!这样至少能先确认我们的 UDP 连接没问题。
所以这就是我们接下来要做的,用的工具是 Wireshark一个绝赞的数据包分析工具。
我之前的操作大致如下:
1. 打开 Wireshark,点击 “capture” 按钮。
2. (在搜索栏)输入 `udp.port == 53` 作为一个筛选条件,然后键盘点击 Enter。
3. 在我的终端运行 `ping example.com`(用来生成一个 DNS 查询)。
4. 点击 DNS 查询显示“Standard query A example.com”
“A”查询类型“example.com”域名“Standard query”标准查询
5. 右键点击位于左下角面板上的“Domain Name System (query)”。
6. 点击 Copy -> as a hex stream。
7. 这样,十六进制串
`b96201000001000000000000076578616d706c6503636f6d0000010001`
就进入了我的剪贴板,稍后就可以用在我的 Ruby 程序里。好欸!
##### 步骤三:解析 16 进制数据流并发送 DNS 查询
现在我们能够发送我们的 DNS 查询到 `8.8.8.8` 了!就像这样,我们只需要再加 5 行代码:
```
hex_string = "b96201000001000000000000076578616d706c6503636f6d0000010001"
bytes = [hex_string].pack('H*')
sock.send(bytes, 0)
# get the reply
reply, _ = sock.recvfrom(1024)
puts reply.unpack('H*')
```
`[hex_string].pack('H*')` 的意思是把我们的十六进制字符串转换成一个字节串。此时我们还不知道这串数据究竟是什么意思,但很快就会知道了。
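举个小例子感受一下 `pack('H*')` 和它的逆操作 `unpack('H*')`(下面的十六进制串是随便挑的,只用于演示):

```ruby
bytes = ["6869"].pack('H*')      # 十六进制 68、69 正好是 ASCII 的 'h' 和 'i'
puts bytes                       # => hi
puts bytes.unpack('H*').first    # => 6869unpack 把字节串转换回十六进制字符串
```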
我们还可以借此机会用 `tcpdump` 确认程序是否正常运行、是否发送出了有效的数据。我是这么做的:
1. 在一个终端选项卡里执行 `sudo tcpdump -ni any port 53 and host 8.8.8.8`
2. 在另一个终端选项卡里,运行[这个程序][3]`ruby dns-1.rb`
以下是输出结果:
```
$ sudo tcpdump -ni any port 53 and host 8.8.8.8
08:50:28.287440 IP 192.168.1.174.12345 > 8.8.8.8.53: 47458+ A? example.com. (29)
08:50:28.312043 IP 8.8.8.8.53 > 192.168.1.174.12345: 47458 1/0/0 A 93.184.216.34 (45)
```
非常棒 —— 我们可以看到 DNS 请求(“`example.com` 的 IP 地址是什么?”)和响应(“它是 93.184.216.34”。所以一切运行正常。现在只需要你懂的搞清楚这串数据是如何生成、又该如何解析的。
##### 步骤四:学一点点 DNS 查询是怎样布局的
现在我们有一个关于 `example.com` 的 DNS 查询,让我们了解它的含义。
下方是我们的查询(十六进制格式):
```
b96201000001000000000000076578616d706c6503636f6d0000010001
```
如果你在 Wireshark 里查看,就能看到这个查询由两部分组成:
* **请求头**the header`b96201000001000000000000`
* **问题本身**the question`076578616d706c6503636f6d0000010001`
##### 步骤五制作请求头the header
我们这一步的目标是生成字节串 `b96201000001000000000000`(用一个 Ruby 函数来生成,而不是直接把它硬编码进去)。
硬编码hardcode指在软件实现上将输出或输入的相关参数例如路径、输出的形式或格式直接以**常量**的方式撰写在源代码中,而非在运行期间由外界指定的设置、资源、数据或格式做出适当回应。)
那么:请求头一共 12 个字节。这 12 个字节到底是什么意思呢?如果你在 Wireshark 里看看(或者阅读 [RFC-1035][4]),就会明白:它是由 6 个 2 字节的数字串联在一起组成的。
这 6 个数字分别对应查询的 ID、标志以及数据包中的问题数、回答资源记录数、权威名称服务器记录数和附加资源记录数。
我们不需要在意这些都是些什么东西 —— 我们只需要把这六个数字输进去就行。
不过所幸我们知道这 6 个数字该填什么,因为我们的目标就是原样生成字符串 `b96201000001000000000000`。
所以,下面是生成请求头的函数(注意:这里没有 `return`,因为在 Ruby 中,函数会把最后一行表达式的值作为返回值,不需要显式写 `return`:
```
def make_question_header(query_id)
# id, flags, num questions, num answers, num auth, num additional
[query_id, 0x0100, 0x0001, 0x0000, 0x0000, 0x0000].pack('nnnnnn')
end
```
这个函数非常短,主要因为除了查询 ID其余内容都是硬编码的。
##### 什么是 `nnnnnn`?
你可能想知道 `.pack('nnnnnn')` 里的 `nnnnnn` 是什么意思。它是一个格式字符串,告诉 `.pack()` 如何把这 6 个数字转换成一个字节串。
[`.pack` 的文档在这里][5],其中对 `n` 的描述是:“表示为 16 位无符号整数,网络(大端序)字节序”。
大端序Big-endian指将高位字节存储在低地址,低位字节存储在高地址的方式。)
16 比特等同于 2 字节,同时我们需要用网络字节序,因为这属于计算机网络范畴。我不会再去解释什么是字节序了(尽管我确实有[一本尝试去描述它的自制漫画][6]
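用一个小例子感受一下 `n` 的效果(这里拿请求头里的标志值 `0x0100` 来演示):

```ruby
bytes = [0x0100].pack('n')        # 编码成 2 个字节
puts bytes.unpack('C*').inspect   # => [1, 0],高位字节在前,这就是“大端序”
puts bytes.unpack('H*').first     # => 0100
```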
##### 测试请求头代码
让我们快速检测一下我们的 `make_question_header` 函数运行情况。
```
puts make_question_header(0xb962) == ["b96201000001000000000000"].pack("H*")
```
如果运行后输出 “true”我们就成功了。
好了,我们接着继续。
##### 步骤六:为域名编码
下一步我们需要生成**问题本身**(“`example.com` 的 IP 是什么?”)。它有三个部分:
* **域名**比如说“example.com”
* **查询类型**比如说“A”代表 “IPv4 地址”**A**ddress
* **查询类**1代表 **IN**ternet“互联网”
这三个里面比较麻烦的是域名,我们来写个函数处理它。
`example.com` 在 DNS 查询中被编码成十六进制的 `076578616d706c6503636f6d00`。这是什么意思呢?
如果我们把这些字节以 ASCII 值翻译出来,结果会是这样:
```
076578616d706c6503636f6d00
7 e x a m p l e 3 c o m 0
```
你看,每一个片段(比如 `example`)前面都带着它的长度(比如 `7`)。
下面是把 `example.com` 转换成 `7 e x a m p l e 3 c o m 0` 的 Ruby 代码:
```
def encode_domain_name(domain)
domain
.split(".")
.map { |x| x.length.chr + x }
.join + "\0"
end
```
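可以顺手验证一下,它的输出和我们前面在 Wireshark 里看到的十六进制是一致的:

```ruby
puts encode_domain_name("example.com").unpack('H*')
# => 076578616d706c6503636f6d00
```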
最后,要完成“问题本身”这部分,我们只需要在编码后的域名后面追加上查询类型和查询类。
##### 步骤七:编写 `make_dns_query`
下面是制作一个 DNS 查询的最终函数:
```
def make_dns_query(domain, type)
query_id = rand(65535)
header = make_question_header(query_id)
question = encode_domain_name(domain) + [type, 1].pack('nn')
header + question
end
```
[这是目前我们写的所有代码的文件链接dns-2.rb][7] —— 目前只有 29 行。
##### 接下来是“解析”阶段
现在我们要开始解析 DNS 响应了,这是比较硬核的部分。同样,我们会把它分成几个部分:
* 解析一个 DNS 的请求头
* 解析一个 DNS 的名称
* 解析一个 DNS 的记录
这几个部分中最难的(可能出乎你的意料)是“解析一个 DNS 名称”。
##### 步骤八:解析 DNS 的请求头
让我们先从最简单的部分开始:请求头。我们之前已经讲过,它是由 6 个数字串联在一起组成的。
那么我们现在要做的就是:
* 读其首部 12 个字节
* 将其转换成一个由 6 个数字组成的数组
* 将这六个数字归于一个类中方便使用
以下是具体进行工作的 Ruby 代码:
```
class DNSHeader
attr_reader :id, :flags, :num_questions, :num_answers, :num_auth, :num_additional
def initialize(buf)
hdr = buf.read(12)
@id, @flags, @num_questions, @num_answers, @num_auth, @num_additional = hdr.unpack('nnnnnn')
end
end
```
注:`attr_reader` 是 Ruby 的一个特性,意思是“把这些实例变量暴露成方法”。也就是说,我们可以用 `header.flags` 来读取 `@flags` 变量。
我们可以用 `DNSHeader.new(buf)` 来创建它,还不赖。
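下面是一个简单的用法示意(假设 `reply` 是前面 `sock.recvfrom` 返回的响应字节串):

```ruby
require 'stringio'

# DNSHeader 只要求传入的对象支持 read所以把字节串包进 StringIO 正合适
buf = StringIO.new(reply)
header = DNSHeader.new(buf)
puts header.num_answers   # 对 example.com 的查询来说,通常是 1
```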
让我们往最难的那一步挪挪:解析一个域名。
##### 步骤九:解析一个域名
首先,我们先写一个还不太对的版本:
```
def read_domain_name_wrong(buf)
domain = []
loop do
len = buf.read(1).unpack('C')[0]
break if len == 0
domain << buf.read(len)
end
domain.join('.')
end
```
这段代码会反复地先读取 1 个字节作为长度,再按这个长度读出相应的字节加入数组,直到读到的长度为 0 为止,最后用 “.” 把各段拼起来。
对于响应中第一次出现的域名(`example.com`),它工作得很好。
##### 关于域名方面的麻烦:压缩!
但当 `example.com` 第二次出现时就出问题了 —— Wireshark 显示那里的“域名”只是神秘的 2 个字节:`c00c`。
这就是所谓的 **DNS 域名压缩**,如果我们想解析其余的响应内容,就得先把它实现了。
好在这并没有**那么**难。这里 `c00c` 的含义是:
* 前两个比特(`0b11.....`)的意思是“接下来是 DNS 域名压缩!”
* 余下的 14 比特是一个整数。在这里这个整数是 `12``0x0c`),意思是“回到数据包的第 12 个字节处,使用在那里找到的域名”。
如果你想进一步了解 DNS 域名压缩的细节,我觉得 [DNS RFC 里的这一段][8]解释得还算易懂。
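我们可以用几行 Ruby 把 `c00c` 这两个字节手动算一遍,和上面的说法对一对:

```ruby
len, second_byte = 0xc0, 0x0c

puts format("%08b", len)                # => 11000000前两个比特是 11说明是压缩指针
puts len & 0b11000000 == 0b11000000     # => true
offset = ((len & 0x3f) << 8) + second_byte
puts offset                             # => 12也就是“回到数据包的第 12 个字节处”
```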
##### 步骤十:实现 DNS 域名压缩
所以我们需要一个更复杂版本的 `read_domain_name`
如下所示:
```
def read_domain_name(buf)
  domain = []
  loop do
    len = buf.read(1).unpack('C')[0]
    break if len == 0
    if len & 0b11000000 == 0b11000000
      # weird case: DNS compression!
      second_byte = buf.read(1).unpack('C')[0]
      offset = ((len & 0x3f) << 8) + second_byte
      old_pos = buf.pos
      buf.pos = offset
      domain << read_domain_name(buf)
      buf.pos = old_pos
      break
    else
      # normal case
      domain << buf.read(len)
    end
  end
  domain.join('.')
end
```
这里具体是这么个情况:
* 如果前两个比特是 `0b11`,就说明遇到了 DNS 域名压缩。那么:
* 再读取一个字节,和第一个字节一起算出偏移量。
* 在缓冲区保存当前位置。
* 跳到偏移量指向的位置,读取那里的域名。
* 把缓冲区恢复到之前保存的位置。
可能看起来很乱,但是这是解析 DNS 响应的部分中最难的一处了,我们快搞定了!
##### 一个关于 DNS 压缩的漏洞
有人指出,恶意攻击者可以构造一个 DNS 压缩条目指向其自身的 DNS 响应来利用这段代码,让 `read_domain_name` 陷入无限循环。我在这里不打算修复它(这段代码的问题已经够多了!),但一个真正的 DNS 解析器确实需要更谨慎地处理这种情况。比如,这里有[能够避免在 miekg/dns 中陷入无限循环的代码][9]。
##### 步骤十一:解析一个 DNS 查询
你可能在想:“为什么我们要解析一个 DNS _查询_?这不是一个响应吗?”
但每个 DNS 响应里都带着它原本对应的查询,所以我们有必要解析它。
这是解析 DNS 查询的代码:
```
class DNSQuery
attr_reader :domain, :type, :cls
def initialize(buf)
@domain = read_domain_name(buf)
@type, @cls = buf.read(4).unpack('nn')
end
end
```
内容不多:类型和类各占 2 个字节。
##### 步骤十二:解析一个 DNS 记录
最让人兴奋的部分来了DNS 记录才是我们要查询的数据真正存放的地方。其中的 “rdata 字段”“记录数据字段”就存放着我们在 DNS 响应中要拿到的 IP 地址。
代码如下:
```
class DNSRecord
attr_reader :name, :type, :class, :ttl, :rdlength, :rdata
def initialize(buf)
@name = read_domain_name(buf)
@type, @class, @ttl, @rdlength = buf.read(10).unpack('nnNn')
@rdata = buf.read(@rdlength)
end
```
我们还需要让 `rdata` 字段变得更可读。记录数据的解析方式取决于记录类型:比如 “A” 记录就是 4 个字节的 IP 地址,而 “CNAME” 记录则是一个域名。
所以,下面的代码可以把记录数据变成可读的形式:
```
def read_rdata(buf, length)
@type_name = TYPES[@type] || @type
if @type_name == "CNAME" or @type_name == "NS"
read_domain_name(buf)
elsif @type_name == "A"
buf.read(length).unpack('C*').join('.')
else
buf.read(length)
end
end
```
这个函数使用了 `TYPES` 这个哈希表将一个记录类型映射为一个更可读的名称:
```
TYPES = {
1 => "A",
2 => "NS",
5 => "CNAME",
# there are a lot more but we don't need them for this example
}
```
`read_rdata` 中最有趣的可能就是 `buf.read(length).unpack('C*').join('.')` 这一行,它像是在说:“嘿,这是一个 IP 地址4 个字节),把它转换成由 4 个数字组成的数组,然后用 ‘.’ 把它们连起来。”
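举个例子,`example.com` 的 A 记录的 rdata 是 4 个字节,正好对应它的 IP 地址:

```ruby
rdata = "\x5d\xb8\xd8\x22"          # 93.184.216.34 对应的 4 个字节
puts rdata.unpack('C*').join('.')   # => 93.184.216.34
```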
##### 解析 DNS 响应的收尾工作
现在我们正式准备好解析 DNS 响应了!
工作代码如下所示:
```
require 'stringio'   # StringIO 需要显式 require

class DNSResponse
attr_reader :header, :queries, :answers, :authorities, :additionals
def initialize(bytes)
buf = StringIO.new(bytes)
@header = DNSHeader.new(buf)
@queries = (1..@header.num_questions).map { DNSQuery.new(buf) }
@answers = (1..@header.num_answers).map { DNSRecord.new(buf) }
@authorities = (1..@header.num_auth).map { DNSRecord.new(buf) }
@additionals = (1..@header.num_additional).map { DNSRecord.new(buf) }
end
end
```
这里大部分内容就是在调用之前我们写过的其他函数来协助解析 DNS 响应。
如果 `@header.num_answers` 的值为 2代码就会用 `(1..@header.num_answers).map` 这个小技巧创建一个包含两个 DNS 记录的数组。(这在 Ruby 里可能有点炫技,但我觉得挺有趣的,希望不会影响可读性。)
我们可以把这段代码整合进我们的主函数中,就像这样:
```
sock.send(make_dns_query("example.com", 1), 0) # 1 is "A", for IP address
reply, _ = sock.recvfrom(1024)
response = DNSResponse.new(reply) # parse the response!!!
puts response.answers[0]
```
不过输出结果看起来有点辣眼睛(类似 `#<DNSRecord:0x00000001368e3118>`),所以我们还需要写一点格式化输出的代码,让它更可读。
##### 步骤十三:美化 DNS 记录的输出
我们需要给 DNS 记录加一个 `.to_s` 方法,让它有一个友好的字符串表示。这只是 `DNSRecord` 里的一个单行方法:
```
def to_s
"#{@name}\t\t#{@ttl}\t#{@type_name}\t#{@parsed_rdata}"
end
```
你可能也注意到了,我忽略了 DNS 记录中的 `class` 字段。那是因为它几乎总是 IN代表 “internet”),输出它有些多余,虽然很多 DNS 工具(比如真正的 `dig`)会把 class 也打印出来。
##### 大功告成!
这是我们最终的主函数:
```
def main
# connect to google dns
sock = UDPSocket.new
sock.bind('0.0.0.0', 12345)
sock.connect('8.8.8.8', 53)
# send query
domain = ARGV[0]
sock.send(make_dns_query(domain, 1), 0)
# receive & parse response
reply, _ = sock.recvfrom(1024)
response = DNSResponse.new(reply)
response.answers.each do |record|
puts record
end
end
```
这里没有太多要补充的:我们建立连接、发送一个查询、打印每一条回答,然后退出。完事儿!
```
$ ruby dig.rb example.com
example.com 18608 A 93.184.216.34
```
你可以在这里查看最终程序:[dig.rb][2]。可以根据你的喜好给它增加更多特性,就比如说:
* 为其他查询类型添加美化输出。
* 输出 DNS 响应中的“权威”authority和“附加”additional部分
* 重试
* 确保我们收到的 DNS 响应确实对应我们发出的查询ID 必须对得上!),可以参考下面的小草稿
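关于最后一条,这里有一个简单的思路草稿(假设我们把生成查询的函数改成把 `query_id` 一并返回,这不是原程序的一部分,只是个示意):

```ruby
# 草稿:让查询函数把 query_id 一起返回,收到响应后核对 ID
def make_dns_query_with_id(domain, type)
  query_id = rand(65535)
  header = make_question_header(query_id)
  question = encode_domain_name(domain) + [type, 1].pack('nn')
  [header + question, query_id]
end

query, query_id = make_dns_query_with_id("example.com", 1)
sock.send(query, 0)
reply, _ = sock.recvfrom(1024)
response = DNSResponse.new(reply)
raise "DNS 响应的 ID 和查询对不上!" unless response.header.id == query_id
```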
另外,如果这篇帖子里有什么错误,欢迎[在推特上和我聊聊][10]。(这篇写得比较赶,所以很可能有错。)
--------------------------------------------------------------------------------
via: https://jvns.ca/blog/2022/11/06/making-a-dns-query-in-ruby-from-scratch/
作者:[Julia Evans][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://jvns.ca/
[b]: https://github.com/lujun9972
[1]: https://jvns.ca/blog/2022/02/01/a-dns-resolver-in-80-lines-of-go/
[2]: https://gist.github.com/jvns/1e5838a53520e45969687e2f90199770
[3]: https://gist.github.com/jvns/aa202b1edd97ae261715c806b2ba7d39
[4]: https://datatracker.ietf.org/doc/html/rfc1035#section-4.1.1
[5]: https://ruby-doc.org/core-3.0.0/Array.html#method-i-pack
[6]: https://wizardzines.com/comics/little-endian/
[7]: https://gist.github.com/jvns/3587ea0b4a2a6c20dcfd8bf653fc11d9
[8]: https://datatracker.ietf.org/doc/html/rfc1035#section-4.1.4
[9]: https://link.zhihu.com/?target=https%3A//github.com/miekg/dns/blob/b3dfea07155dbe4baafd90792c67b85a3bf5be23/msg.go%23L430-L435
[10]: https://twitter.com/b0rk