第一版本,完整翻译

This commit is contained in:
bai 2020-05-06 11:27:43 +08:00
parent 40beccdf1b
commit 69f9a39b72
73 changed files with 5465 additions and 1909 deletions

View File

@ -9,43 +9,43 @@ more information and coordination
-->
- [Introduction](./intro/index.md)
- [Hardware](./intro/hardware.md)
- [简介](./intro/index.md)
- [硬件](./intro/hardware.md)
- [`no_std`](./intro/no-std.md)
- [Tooling](./intro/tooling.md)
- [Installation](./intro/install.md)
- [工具](./intro/tooling.md)
- [安装](./intro/install.md)
- [Linux](./intro/install/linux.md)
- [MacOS](./intro/install/macos.md)
- [Windows](./intro/install/windows.md)
- [Verify Installation](./intro/install/verify.md)
- [Getting started](./start/index.md)
- [验证安装](./intro/install/verify.md)
- [入门](./start/index.md)
- [QEMU](./start/qemu.md)
- [Hardware](./start/hardware.md)
- [Memory-mapped Registers](./start/registers.md)
- [Semihosting](./start/semihosting.md)
- [Panicking](./start/panicking.md)
- [Exceptions](./start/exceptions.md)
- [Interrupts](./start/interrupts.md)
- [硬件](./start/hardware.md)
- [内存映射寄存器](./start/registers.md)
- [半主机](./start/semihosting.md)
- [恐慌](./start/panicking.md)
- [异常](./start/exceptions.md)
- [中断](./start/interrupts.md)
- [IO](./start/io.md)
- [Peripherals](./peripherals/index.md)
- [A first attempt in Rust](./peripherals/a-first-attempt.md)
- [The Borrow Checker](./peripherals/borrowck.md)
- [Singletons](./peripherals/singletons.md)
- [Static Guarantees](./static-guarantees/index.md)
- [Typestate Programming](./static-guarantees/typestate-programming.md)
- [Peripherals as State Machines](./static-guarantees/state-machines.md)
- [Design Contracts](./static-guarantees/design-contracts.md)
- [Zero Cost Abstractions](./static-guarantees/zero-cost-abstractions.md)
- [Portability](./portability/index.md)
- [Concurrency](./concurrency/index.md)
- [Collections](./collections/index.md)
- [Tips for embedded C developers](./c-tips/index.md)
- [外设](./peripherals/index.md)
- [初试Rust](./peripherals/a-first-attempt.md)
- [借用检查器](./peripherals/borrowck.md)
- [单例](./peripherals/singletons.md)
- [静态保证](./static-guarantees/index.md)
- [类型状态(Typestate)编程](./static-guarantees/typestate-programming.md)
- [外设作为状态机](./static-guarantees/state-machines.md)
- [设计合约](./static-guarantees/design-contracts.md)
- [零成本抽象](./static-guarantees/zero-cost-abstractions.md)
- [可移植性](./portability/index.md)
- [并发](./concurrency/index.md)
- [容器](./collections/index.md)
- [嵌入式C开发人员的技巧](./c-tips/index.md)
<!-- TODO: Define Sections -->
- [Interoperability](./interoperability/index.md)
- [A little C with your Rust](./interoperability/c-with-rust.md)
- [A little Rust with your C](./interoperability/rust-with-c.md)
- [Unsorted topics](./unsorted/index.md)
- [Optimizations: The speed size tradeoff](./unsorted/speed-vs-size.md)
- [互操作性](./interoperability/index.md)
- [Rust中使用C代码](./interoperability/c-with-rust.md)
- [C中使用Rust代码](./interoperability/rust-with-c.md)
- [其他主题](./unsorted/index.md)
- [优化:速度大小的权衡](./unsorted/speed-vs-size.md)
---

View File

@ -1,16 +1,14 @@
# Appendix A: Glossary
# 附录A:术语表
The embedded ecosystem is full of different protocols, hardware components and
vendor-specific things that use their own terms and abbreviations. This Glossary
attempts to list them with pointers for understanding them better.
Term | Meaning
-------------|--------
I2C | Sometimes referred to as `I² C` or Inter-IC. It is a protocol meant for hardware communication within a single integrated circuit. See [i2c.info] for more details
SPI | Serial Peripheral Interface
USART | Universal synchronous and asynchronous receiver-transmitter
UART | Universal asynchronous receiver-transmitter
FPU | Floating-point Unit. A 'math processor' running only operations on floating-point numbers
PAC | Peripheral Access Crate
术语|含义
------------- | --------
I2C |有时也称为`I²C`或Inter-IC。用于在单个集成电路内的硬件间通信。有关更多详细信息请参见[i2c.info]。
SPI |串行外设接口
USART |通用同步和异步收发器
UART |通用异步收发器
FPU |浮点处理单元。进行浮点数运算的“数学处理器”
PAC |外设访问crate
[i2c.info]: https://i2c.info/
[i2c.info]:https://i2c.info/

View File

@ -0,0 +1,14 @@
# Appendix A: Glossary
The embedded ecosystem is full of different protocols, hardware components and vendor-specific things that use their own terms and abbreviations. This Glossary attempts to list them with pointers for understanding them better.
Term | Meaning
-------------|--------
I2C | Sometimes referred to as `I² C` or Inter-IC. It is a protocol meant for hardware communication within a single integrated circuit. See [i2c.info] for more details
SPI | Serial Peripheral Interface
USART | Universal synchronous and asynchronous receiver-transmitter
UART | Universal asynchronous receiver-transmitter
FPU | Floating-point Unit. A 'math processor' running only operations on floating-point numbers
PAC | Peripheral Access Crate
[i2c.info]: https://i2c.info/

View File

@ -1,37 +1,24 @@
# Tips for embedded C developers
# 嵌入式C开发人员的技巧
This chapter collects a variety of tips that might be useful to experienced
embedded C developers looking to start writing Rust. It will especially
highlight how things you might already be used to in C are different in Rust.
本章收集了各种技巧这些技巧对于希望开始编写Rust的经验丰富的嵌入式C开发人员可能有用。它特别强调了您可能已经在C语言中习惯的事情在Rust中的不同之处。
## Preprocessor
## 预处理器
In embedded C it is very common to use the preprocessor for a variety of
purposes, such as:
在C语言中预处理器有多种用途例如
* Compile-time selection of code blocks with `#ifdef`
* Compile-time array sizes and computations
* Macros to simplify common patterns (to avoid function call overhead)
* #ifdef在编译时选择代码块
* 编译时数组大小和计算
* 宏可简化常见模式(避免函数调用开销)
In Rust there is no preprocessor, and so many of these use cases are addressed
differently. In the rest of this section we cover various alternatives to
using the preprocessor.
Rust没有预处理器因此许多用例的处理方式有所不同。在本节的其余部分我们将介绍预处理器的各种替代方法。
### Compile-Time Code Selection
### 编译时代码选择
The closest match to `#ifdef ... #endif` in Rust are [Cargo features]. These
are a little more formal than the C preprocessor: all possible features are
explicitly listed per crate, and can only be either on or off. Features are
turned on when you list a crate as a dependency, and are additive: if any crate
in your dependency tree enables a feature for another crate, that feature will
be enabled for all users of that crate.
在Rust中,与`#ifdef ... #endif`最接近的对应物是[Cargo features]。它们比C预处理器更正式:每个crate都显式列出了所有可能的特性(features),并且特性只能处于打开或关闭状态。当您把一个crate列为依赖项时可以为它打开特性,而且特性是累加的:如果依赖树中的任何crate为另一个crate启用了某个特性,则该crate的所有使用者都会启用这个特性。
[Cargo features]: https://doc.rust-lang.org/cargo/reference/manifest.html#the-features-section
[Cargo features]:https://doc.rust-lang.org/cargo/reference/manifest.html#the-features-section
For example, you might have a crate which provides a library of signal
processing primitives. Each one might take some extra time to compile or
declare some large table of constants which you'd like to avoid. You could
declare a Cargo feature for each component in your `Cargo.toml`:
例如,您可能有一个提供信号处理原语库的crate。每个原语可能都需要额外的编译时间,或者会声明一张很大的常量表,而这些是您希望避免的。您可以在`Cargo.toml`中为每个组件声明一个Cargo特性:
```toml
[features]
@ -39,7 +26,8 @@ FIR = []
IIR = []
```
Then, in your code, use `#[cfg(feature="FIR")]` to control what is included.
然后,在您的代码中使用`#[cfg(feature="FIR")]`来控制包含的内容。
```rust
/// In your top-level lib.rs
@ -51,29 +39,17 @@ pub mod fir;
pub mod iir;
```
You can similarly include code blocks only if a feature is _not_ enabled, or if
any combination of features are or are not enabled.
Additionally, Rust provides a number of automatically-set conditions you can
use, such as `target_arch` to select different code based on architecture. For
full details of the conditional compilation support, refer to the
[conditional compilation] chapter of the Rust reference.
同样,您也可以仅在某个特性*未*启用时包含代码块,或者依据任意的特性启用/未启用组合来包含代码块。
[conditional compilation]: https://doc.rust-lang.org/reference/conditional-compilation.html
另外,Rust提供了许多自动设置的条件,例如`target_arch`,可用于根据目标架构选择不同的代码。有关条件编译支持的完整详细信息,请参见Rust参考手册的[条件编译]一章。
The conditional compilation will only apply to the next statement or block. If
a block can not be used in the current scope then the `cfg` attribute will
need to be used multiple times. It's worth noting that most of the time it is
better to simply include all the code and allow the compiler to remove dead
code when optimising: it's simpler for you and your users, and in general the
compiler will do a good job of removing unused code.
[条件编译]:https://doc.rust-lang.org/reference/conditional-compilation.html
### Compile-Time Sizes and Computation
条件编译仅作用于紧随其后的语句或块。如果某个块无法在当前作用域中使用,则需要多次使用`cfg`属性。值得注意的是,在大多数情况下,直接包含所有代码并让编译器在优化时删除无效代码会更好:这对您和您的用户都更简单,而且通常编译器能够很好地删除未使用的代码。
Rust supports `const fn`, functions which are guaranteed to be evaluable at
compile-time and can therefore be used where constants are required, such as
in the size of arrays. This can be used alongside features mentioned above,
for example:
### 编译时大小和计算
Rust支持`const fn`,这些函数保证在编译时可以求值,因此可以在需要常量的地方使用,例如数组大小。可以与上述功能一起使用,例如:
```rust
const fn array_size() -> usize {
@ -86,83 +62,48 @@ const fn array_size() -> usize {
static BUF: [u32; array_size()] = [0u32; array_size()];
```
These are new to stable Rust as of 1.31, so documentation is still sparse. The
functionality available to `const fn` is also very limited at the time of
writing; in future Rust releases it is expected to expand on what is permitted
in a `const fn`.
这些新特性刚刚在Rust 1.31版本稳定下来,因此文档仍然很少。在编写本文时,`const fn`可用的功能非常有限。在将来的Rust版本中有望扩展`const fn`允许的范围。
### Macros
### 宏
Rust provides an extremely powerful [macro system]. While the C preprocessor
operates almost directly on the text of your source code, the Rust macro system
operates at a higher level. There are two varieties of Rust macro: _macros by
example_ and _procedural macros_. The former are simpler and most common; they
look like function calls and can expand to a complete expression, statement,
item, or pattern. Procedural macros are more complex but permit extremely
powerful additions to the Rust language: they can transform arbitrary Rust
syntax into new Rust syntax.
Rust提供了非常强大的[宏系统]。C预处理器几乎是直接在源代码文本上操作,而Rust宏系统则工作在更高的层面上。Rust宏有两种:声明宏(macros by example)和过程宏(procedural macros)。前者更简单也最常见:它们看起来像函数调用,可以展开为完整的表达式、语句、项(item)或模式。过程宏更复杂,但功能也更强大:它们可以将任意Rust语法转换为新的Rust语法。
[macro system]: https://doc.rust-lang.org/book/ch19-06-macros.html
[宏系统]:https://doc.rust-lang.org/book/ch19-06-macros.html
In general, where you might have used a C preprocessor macro, you probably want
to see if a macro-by-example can do the job instead. They can be defined in
your crate and easily used by your own crate or exported for other users. Be
aware that since they must expand to complete expressions, statements, items,
or patterns, some use cases of C preprocessor macros will not work, for example
a macro that expands to part of a variable name or an incomplete set of items
in a list.
通常,在您可能会使用C预处理器宏的地方,可以先看看声明宏能否胜任。它们可以在您的crate中定义,既可以在自己的crate中方便地使用,也可以导出给其他用户。请注意,由于它们必须展开为完整的表达式、语句、项或模式,某些C预处理器宏的用例将无法实现,例如展开为变量名一部分的宏,或者展开为列表中不完整条目集合的宏。
As with Cargo features, it is worth considering if you even need the macro. In
many cases a regular function is easier to understand and will be inlined to
the same code as a macro. The `#[inline]` and `#[inline(always)]` [attributes]
give you further control over this process, although care should be taken here
as well — the compiler will automatically inline functions from the same crate
where appropriate, so forcing it to do so inappropriately might actually lead
to decreased performance.
与Cargo特性一样,是否真的需要宏也值得考虑。在许多情况下,普通函数更易于理解,并且会被内联成与宏相同的代码。`#[inline]`和`#[inline(always)]`[属性]可以让您进一步控制这一过程,不过此处也应格外小心:编译器会在合适的情况下自动内联同一crate中的函数,强迫它进行不恰当的内联反而可能导致性能下降。
[attributes]: https://doc.rust-lang.org/reference/attributes.html#inline-attribute
[属性]:https://doc.rust-lang.org/reference/attributes.html#inline-attribute
Explaining the entire Rust macro system is out of scope for this tips page, so
you are encouraged to consult the Rust documentation for full details.
解释整个Rust宏系统超出了本书的范围因此建议您查阅Rust文档以获取全部详细信息。
## Build System
## 构建系统
Most Rust crates are built using Cargo (although it is not required). This
takes care of many difficult problems with traditional build systems. However,
you may wish to customise the build process. Cargo provides [`build.rs`
scripts] for this purpose. They are Rust scripts which can interact with the
Cargo build system as required.
大多数Rust crate都是使用Cargo构建的(尽管不是必需的)。这可以解决传统构建系统中的许多难题。但是,您可能希望自定义构建过程。 Cargo为此提供了[build.rs脚本]。它们是Rust脚本可以根据需要与Cargo构建系统进行交互。
[`build.rs` scripts]: https://doc.rust-lang.org/cargo/reference/build-scripts.html
[build.rs脚本]:https://doc.rust-lang.org/cargo/reference/build-scripts.html
Common use cases for build scripts include:
构建脚本的常见用例包括:
* provide build-time information, for example statically embedding the build
date or Git commit hash into your executable
* generate linker scripts at build time depending on selected features or other
logic
* change the Cargo build configuration
* add extra static libraries to link against
* 提供构建时信息例如将构建日期或Git commit哈希静态嵌入到可执行文件中
* 在构建时根据所选功能或其他逻辑生成链接脚本
* 更改Cargo构建配置
* 添加额外的静态链接库
At present there is no support for post-build scripts, which you might
traditionally have used for tasks like automatic generation of binaries from
the build objects or printing build information.
当前,不支持构建后脚本,传统上您可能会使用这些脚本来完成诸如从构建对象自动生成二进制文件或打印构建信息之类的任务。
### Cross-Compiling
### 交叉编译
Using Cargo for your build system also simplifies cross-compiling. In most
cases it suffices to tell Cargo `--target thumbv6m-none-eabi` and find a
suitable executable in `target/thumbv6m-none-eabi/debug/myapp`.
将Cargo用于您的构建系统还可以简化交叉编译。在大多数情况下只需告诉Cargo `--target thumbv6m-none-eabi`,就可以在`target/thumbv6m-none-eabi/debug/myapp`中找到生成的可执行文件。
For platforms not natively supported by Rust, you will need to build `libcore`
for that target yourself. On such platforms, [Xargo] can be used as a stand-in
for Cargo which automatically builds `libcore` for you.
对于Rust不直接支持的平台您将需要自己构建`libcore`。在这样的平台上,[Xargo]可用作Cargo的替代品它会自动为您构建`libcore`。
[Xargo]: https://github.com/japaric/xargo
## Iterators vs Array Access
## 迭代器与数组
In C you are probably used to accessing arrays directly by their index:
在C语言中您可能习惯于通过数组的索引直接访问数组
```c
int16_t arr[16];
@ -172,70 +113,46 @@ for(i=0; i<sizeof(arr)/sizeof(arr[0]); i++) {
}
```
In Rust this is an anti-pattern: indexed access can be slower (as it needs to
be bounds checked) and may prevent various compiler optimisations. This is an
important distinction and worth repeating: Rust will check for out-of-bounds
access on manual array indexing to guarantee memory safety, while C will
happily index outside the array.
在Rust中这是一种反模式索引访问可能较慢(因为需要对边界进行检查)并且可能阻止各种编译器优化。这是一个重要的区别值得重复Rust将检查数组的越界访问以确保内存安全而C将愉快地接受越界访问。
Instead, use iterators:
所以,请使用迭代器:
```rust,ignore
```rust,ignore
let arr = [0u16; 16];
for element in arr.iter() {
process(*element);
}
```
Iterators provide a powerful array of functionality you would have to implement
manually in C, such as chaining, zipping, enumerating, finding the min or max,
summing, and more. Iterator methods can also be chained, giving very readable
data processing code.
迭代器提供了一系列强大的在C中必须手动实现的功能例如链式调用zip枚举查找最小值或最大值求和等等。迭代器方法可以链式调用以提高代码的可读性。
See the [Iterators in the Book] and [Iterator documentation] for more details.
有关更多详细信息,请参见[Rust book中的迭代器]和[迭代器文档]。
[Iterators in the Book]: https://doc.rust-lang.org/book/ch13-02-iterators.html
[Iterator documentation]: https://doc.rust-lang.org/core/iter/trait.Iterator.html
[Rust book中的迭代器]:https://doc.rust-lang.org/book/ch13-02-iterators.html
[迭代器文档]:https://doc.rust-lang.org/core/iter/trait.Iterator.html
## References vs Pointers
## 引用与指针
In Rust, pointers (called [_raw pointers_]) exist but are only used in specific
circumstances, as dereferencing them is always considered `unsafe` -- Rust
cannot provide its usual guarantees about what might be behind the pointer.
在Rust中指针(称为[裸指针])仅在特定情况下使用,因为对它们的解引用始终被认为是“不安全的”(`unsafe`)-Rust无法为其指向的内容提供通常的保证。
[_raw pointers_]: https://doc.rust-lang.org/book/ch19-01-unsafe-rust.html#dereferencing-a-raw-pointer
[裸指针]:https://doc.rust-lang.org/book/ch19-01-unsafe-rust.html#dereferencing-a-raw-pointer
In most cases, we instead use _references_, indicated by the `&` symbol, or
_mutable references_, indicated by `&mut`. References behave similarly to
pointers, in that they can be dereferenced to access the underlying values, but
they are a key part of Rust's ownership system: Rust will strictly enforce that
you may only have one mutable reference _or_ multiple non-mutable references to
the same value at any given time.
在大多数情况下,我们改为使用由`&`符号表示的引用或由`&mut`符号表示的可变引用。引用的行为与指针相似因为它们可以被解引用以访问指向的值但是它们是Rust所有权系统的关键部分Rust严格要求您任何时候只能拥有一个可变引用或多个非可变引用。
In practice this means you have to be more careful about whether you need
mutable access to data: where in C the default is mutable and you must be
explicit about `const`, in Rust the opposite is true.
在实践中这意味着您必须更加小心是否需要对数据进行可变访问在C中默认值是可变的而对于`const`则必须明确在Rust中则相反。
One situation where you might still use raw pointers is interacting directly
with hardware (for example, writing a pointer to a buffer into a DMA peripheral
register), and they are also used under the hood for all peripheral access
crates to allow you to read and write memory-mapped registers.
有一种情况您可能仍然会用到裸指针,那就是直接与硬件交互(例如,将指向缓冲区的指针写入DMA外设寄存器)。此外,所有外设访问crate的底层也都使用裸指针,以便您能够读写内存映射寄存器。
## Volatile Access
## 易失性(volatile)访问
In C, individual variables may be marked `volatile`, indicating to the compiler
that the value in the variable may change between accesses. Volatile variables
are commonly used in an embedded context for memory-mapped registers.
在C语言中各个变量可以标记为 `volatile`,告诉编译器变量中的值可能在两次访问之间改变。`volatile`变量通常在嵌入式系统中用于内存映射寄存器的访问。
In Rust, instead of marking a variable as `volatile`, we use specific methods
to perform volatile access: [`core::ptr::read_volatile`] and
[`core::ptr::write_volatile`]. These methods take a `*const T` or a `*mut T`
(_raw pointers_, as discussed above) and perform a volatile read or write.
在Rust中我们不是使用volatile来标记变量而是使用特定的方法来实现volatile访问[`core::ptr::read_volatile`]和[`core::ptr::write_volatile`]。这些方法使用`*const T` 或`*mut T`(如上所述的裸指针)作为参数来执行易失性读取或写入。
[`core::ptr::read_volatile`]: https://doc.rust-lang.org/core/ptr/fn.read_volatile.html
[`core::ptr::write_volatile`]: https://doc.rust-lang.org/core/ptr/fn.write_volatile.html
[`core::ptr::read_volatile`]:https://doc.rust-lang.org/core/ptr/fn.read_volatile.html
[`core::ptr::write_volatile`]:https://doc.rust-lang.org/core/ptr/fn.write_volatile.html
For example, in C you might write:
例如在C中您这样写
```c
volatile bool signalled = false;
@ -257,9 +174,9 @@ void driver() {
}
```
The equivalent in Rust would use volatile methods on each access:
Rust中的等效代码是在每次访问时使用volatile方法
```rust,ignore
```rust,ignore
static mut SIGNALLED: bool = false;
#[interrupt]
@ -282,32 +199,19 @@ fn driver() {
}
```
A few things are worth noting in the code sample:
* We can pass `&mut SIGNALLED` into the function requiring `*mut T`, since
`&mut T` automatically converts to a `*mut T` (and the same for `*const T`)
* We need `unsafe` blocks for the `read_volatile`/`write_volatile` methods,
since they are `unsafe` functions. It is the programmer's responsibility
to ensure safe use: see the methods' documentation for further details.
示例代码中需要注意以下几点:
* 我们可以将`&mut SIGNALLED`传递到需要`*mut T`的write_volatile中因为`&mut T`会自动转换为`*mut T`(`&T`自动转换为`*const T`)。
* 对于`read_volatile`/`write_volatile`方法,我们需要使用`unsafe`块,因为它们是`unsafe`函数。确保安全地使用这两个函数是程序员的责任:更多详细信息请参见这两个方法的文档。
It is rare to require these functions directly in your code, as they will
usually be taken care of for you by higher-level libraries. For memory mapped
peripherals, the peripheral access crates will implement volatile access
automatically, while for concurrency primitives there are better abstractions
available (see the [Concurrency chapter]).
您很少需要在自己的代码中直接使用这些函数,因为通常更高层的库会为您处理。对于内存映射外设,外设访问crate会自动实现易失性访问;而对于并发原语,则有更好的抽象可用(请参见[并发章节])。
[Concurrency chapter]: ../concurrency/index.md
[并发章节]:../concurrency/index.md
## Packed and Aligned Types
## 紧凑(packed)与对齐类型
In embedded C it is common to tell the compiler a variable must have a certain
alignment or a struct must be packed rather than aligned, usually to meet
specific hardware or protocol requirements.
在嵌入式C语言中,通常需要告诉编译器某个变量必须具有特定的对齐方式,或者某个结构体必须采用紧凑(packed)布局而不是对齐布局,这通常是为了满足特定的硬件或协议要求。
In Rust this is controlled by the `repr` attribute on a struct or union. The
default representation provides no guarantees of layout, so should not be used
for code that interoperates with hardware or C. The compiler may re-order
struct members or insert padding and the behaviour may change with future
versions of Rust.
在Rust中,这由结构体或联合体上的`repr`属性控制。默认表示形式不提供任何布局保证,因此不应用于与硬件或C互操作的代码。编译器可能会对结构体成员重新排序或插入填充,并且这些行为可能会在Rust未来的版本中发生改变。
```rust
struct Foo {
@ -325,7 +229,7 @@ fn main() {
// Note ordering has been changed to x, z, y to improve packing.
```
To ensure layouts that are interoperable with C, use `repr(C)`:
为了确保布局可以和C互操作请使用 `repr(C)`
```rust
#[repr(C)]
@ -345,7 +249,7 @@ fn main() {
// `z` is two-byte aligned so a byte of padding exists between `y` and `z`.
```
To ensure a packed representation, use `repr(packed)`:
为了确保紧凑内存布局(一字节对齐),请使用 `repr(packed)`
```rust
#[repr(packed)]
@ -365,10 +269,9 @@ fn main() {
// No padding has been inserted between `y` and `z`, so now `z` is unaligned.
```
Note that using `repr(packed)` also sets the alignment of the type to `1`.
注意,使用`repr(packed)`还将类型的对齐方式设置为一字节。
Finally, to specify a specific alignment, use `repr(align(n))`, where `n` is
the number of bytes to align to (and must be a power of two):
最后,要指定特定的对齐方式,请使用`repr(align(n))`,其中`n`是要对齐的字节数(必须为2的幂)
```rust
#[repr(C)]
@ -392,21 +295,18 @@ fn main() {
// evidenced by the `000` at the end of their addresses.
```
Note we can combine `repr(C)` with `repr(align(n))` to obtain an aligned and
C-compatible layout. It is not permissible to combine `repr(align(n))` with
`repr(packed)`, since `repr(packed)` sets the alignment to `1`. It is also not
permissible for a `repr(packed)` type to contain a `repr(align(n))` type.
For further details on type layouts, refer to the [type layout] chapter of the
Rust Reference.
注意,我们可以将`repr(C)`与`repr(align(n))`结合使用,以获得既对齐又兼容C的布局。不允许将`repr(align(n))`与`repr(packed)`结合使用,因为`repr(packed)`会将对齐方式设置为1。同样,也不允许`repr(packed)`类型包含`repr(align(n))`类型。
[type layout]: https://doc.rust-lang.org/reference/type-layout.html
有关类型布局的更多详细信息请参见Rust参考中的[类型布局]一章。
## Other Resources
[类型布局]:https://doc.rust-lang.org/reference/type-layout.html
* In this book:
* [A little C with your Rust](../interoperability/c-with-rust.md)
* [A little Rust with your C](../interoperability/rust-with-c.md)
* [The Rust Embedded FAQs](https://docs.rust-embedded.org/faq.html)
* [Rust Pointers for C Programmers](http://blahg.josefsipek.net/?p=580)
* [I used to use pointers - now what?](https://github.com/diwic/reffers-rs/blob/master/docs/Pointers.md)
## 其他资源
* 本书中的:
* [Rust中使用C代码](../interoperability/c-with-rust.md)
* [C中使用Rust代码](../interoperability/rust-with-c.md)
* [嵌入式Rust常见问题解答](https://docs.rust-embedded.org/faq.html)
* [C程序员的Rust指针](http://blahg.josefsipek.net/?p=580)
* [I used to use pointers - now what?](https://github.com/diwic/reffers-rs/blob/master/docs/Pointers.md)

src/c-tips/index_en.md Normal file
View File

@ -0,0 +1,308 @@
# Tips for embedded C developers
This chapter collects a variety of tips that might be useful to experienced embedded C developers looking to start writing Rust. It will especially highlight how things you might already be used to in C are different in Rust.
## Preprocessor
In embedded C it is very common to use the preprocessor for a variety of purposes, such as:
* Compile-time selection of code blocks with `#ifdef`
* Compile-time array sizes and computations
* Macros to simplify common patterns (to avoid function call overhead)
In Rust there is no preprocessor, and so many of these use cases are addressed differently. In the rest of this section we cover various alternatives to using the preprocessor.
### Compile-Time Code Selection
The closest match to `#ifdef ... #endif` in Rust are [Cargo features]. These are a little more formal than the C preprocessor: all possible features are explicitly listed per crate, and can only be either on or off. Features are turned on when you list a crate as a dependency, and are additive: if any crate in your dependency tree enables a feature for another crate, that feature will be enabled for all users of that crate.
[Cargo features]: https://doc.rust-lang.org/cargo/reference/manifest.html#the-features-section
For example, you might have a crate which provides a library of signal processing primitives. Each one might take some extra time to compile or declare some large table of constants which you'd like to avoid. You could declare a Cargo feature for each component in your `Cargo.toml`:
```toml
[features]
FIR = []
IIR = []
```
Then, in your code, use `#[cfg(feature="FIR")]` to control what is included.
```rust
/// In your top-level lib.rs
#[cfg(feature="FIR")]
pub mod fir;
#[cfg(feature="IIR")]
pub mod iir;
```
You can similarly include code blocks only if a feature is _not_ enabled, or if any combination of features are or are not enabled.
Additionally, Rust provides a number of automatically-set conditions you can use, such as `target_arch` to select different code based on architecture. For full details of the conditional compilation support, refer to the [conditional compilation] chapter of the Rust reference.
[conditional compilation]: https://doc.rust-lang.org/reference/conditional-compilation.html
The conditional compilation will only apply to the next statement or block. If a block can not be used in the current scope then the `cfg` attribute will need to be used multiple times. It's worth noting that most of the time it is better to simply include all the code and allow the compiler to remove dead code when optimising: it's simpler for you and your users, and in general the compiler will do a good job of removing unused code.
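As an illustrative sketch building on the `FIR`/`IIR` features above (the module names here are hypothetical), negations, combinations, and built-in conditions such as `target_arch` all use the same mechanism, and the attribute must be repeated for each item it guards:
```rust
// Only compiled when `FIR` is *not* enabled.
#[cfg(not(feature = "FIR"))]
pub mod fir_fallback {}

// Only compiled when *both* filters are enabled.
#[cfg(all(feature = "FIR", feature = "IIR"))]
pub mod cascade {}

// Pick an implementation per target architecture; note that each `cfg`
// attribute applies only to the single item that follows it.
#[cfg(target_arch = "arm")]
pub mod impl_arm {}
#[cfg(not(target_arch = "arm"))]
pub mod impl_portable {}
```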
### Compile-Time Sizes and Computation
Rust supports `const fn`, functions which are guaranteed to be evaluable at compile-time and can therefore be used where constants are required, such as in the size of arrays. This can be used alongside features mentioned above, for example:
```rust
const fn array_size() -> usize {
#[cfg(feature="use_more_ram")]
{ 1024 }
#[cfg(not(feature="use_more_ram"))]
{ 128 }
}
static BUF: [u32; array_size()] = [0u32; array_size()];
```
These are new to stable Rust as of 1.31, so documentation is still sparse. The functionality available to `const fn` is also very limited at the time of writing; in future Rust releases it is expected to expand on what is permitted in a `const fn`.
### Macros
Rust provides an extremely powerful [macro system]. While the C preprocessor operates almost directly on the text of your source code, the Rust macro system operates at a higher level. There are two varieties of Rust macro: _macros by example_ and _procedural macros_. The former are simpler and most common; they look like function calls and can expand to a complete expression, statement, item, or pattern. Procedural macros are more complex but permit extremely powerful additions to the Rust language: they can transform arbitrary Rust syntax into new Rust syntax.
[macro system]: https://doc.rust-lang.org/book/ch19-06-macros.html
In general, where you might have used a C preprocessor macro, you probably want to see if a macro-by-example can do the job instead. They can be defined in your crate and easily used by your own crate or exported for other users. Be aware that since they must expand to complete expressions, statements, items, or patterns, some use cases of C preprocessor macros will not work, for example a macro that expands to part of a variable name or an incomplete set of items in a list.
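For a concrete sketch, a helper that might have been a C `#define` can become a macro-by-example; `bit!` here is an invented name, not a standard macro:
```rust
// Expands to a complete expression, so it can be used anywhere a value can.
macro_rules! bit {
    ($n:expr) => {
        1u32 << $n
    };
}

fn main() {
    let mask = bit!(3) | bit!(5);
    assert_eq!(mask, 0b10_1000);
}
```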
As with Cargo features, it is worth considering if you even need the macro. In many cases a regular function is easier to understand and will be inlined to the same code as a macro. The `#[inline]` and `#[inline(always)]` [attributes] give you further control over this process, although care should be taken here as well — the compiler will automatically inline functions from the same crate where appropriate, so forcing it to do so inappropriately might actually lead to decreased performance.
[attributes]: https://doc.rust-lang.org/reference/attributes.html#inline-attribute
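A minimal sketch of those attributes on small helpers (whether inlining actually pays off is something to measure, not assume):
```rust
// A hint that this helper is a good inlining candidate.
#[inline]
fn set_bit(word: u32, bit: u8) -> u32 {
    word | (1 << bit)
}

// Forces inlining even across crate boundaries; use sparingly.
#[inline(always)]
fn clear_bit(word: u32, bit: u8) -> u32 {
    word & !(1 << bit)
}

fn main() {
    assert_eq!(set_bit(0, 4), 0x10);
    assert_eq!(clear_bit(0xFF, 0), 0xFE);
}
```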
Explaining the entire Rust macro system is out of scope for this tips page, so you are encouraged to consult the Rust documentation for full details.
## Build System
Most Rust crates are built using Cargo (although it is not required). This takes care of many difficult problems with traditional build systems. However, you may wish to customise the build process. Cargo provides [`build.rs` scripts] for this purpose. They are Rust scripts which can interact with the Cargo build system as required.
[`build.rs` scripts]: https://doc.rust-lang.org/cargo/reference/build-scripts.html
Common use cases for build scripts include:
* provide build-time information, for example statically embedding the build date or Git commit hash into your executable
* generate linker scripts at build time depending on selected features or other logic
* change the Cargo build configuration
* add extra static libraries to link against
At present there is no support for post-build scripts, which you might traditionally have used for tasks like automatic generation of binaries from the build objects or printing build information.
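A small `build.rs` sketch covering the first two bullet points above; the environment-variable name and the generated file name are arbitrary choices for illustration:
```rust,ignore
// build.rs — runs on the host before the crate is compiled.
use std::env;
use std::fs;
use std::path::PathBuf;

fn main() {
    // Embed build-time information: expose a variable the crate can read
    // later with `env!("BUILD_PROFILE")`.
    let profile = env::var("PROFILE").unwrap_or_default();
    println!("cargo:rustc-env=BUILD_PROFILE={}", profile);

    // Generate a (trivial) linker script into OUT_DIR and tell the linker
    // where to find it.
    let out = PathBuf::from(env::var("OUT_DIR").unwrap());
    fs::write(out.join("memory.x"), "/* generated at build time */\n").unwrap();
    println!("cargo:rustc-link-search={}", out.display());

    // Re-run only when the build script itself changes.
    println!("cargo:rerun-if-changed=build.rs");
}
```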
### Cross-Compiling
Using Cargo for your build system also simplifies cross-compiling. In most cases it suffices to tell Cargo `--target thumbv6m-none-eabi` and find a suitable executable in `target/thumbv6m-none-eabi/debug/myapp`.
For platforms not natively supported by Rust, you will need to build `libcore` for that target yourself. On such platforms, [Xargo] can be used as a stand-in for Cargo which automatically builds `libcore` for you.
[Xargo]: https://github.com/japaric/xargo
## Iterators vs Array Access
In C you are probably used to accessing arrays directly by their index:
```c
int16_t arr[16];
int i;
for(i=0; i<sizeof(arr)/sizeof(arr[0]); i++) {
process(arr[i]);
}
```
In Rust this is an anti-pattern: indexed access can be slower (as it needs to be bounds checked) and may prevent various compiler optimisations. This is an important distinction and worth repeating: Rust will check for out-of-bounds access on manual array indexing to guarantee memory safety, while C will happily index outside the array.
Instead, use iterators:
```rust,ignore
let arr = [0u16; 16];
for element in arr.iter() {
process(*element);
}
```
Iterators provide a powerful array of functionality you would have to implement manually in C, such as chaining, zipping, enumerating, finding the min or max, summing, and more. Iterator methods can also be chained, giving very readable data processing code.
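For instance, a host-side sketch combining `zip`, `map`, `enumerate` and `max_by_key` (the sample data is made up):
```rust
fn main() {
    let samples = [3u16, 7, 1, 9, 4, 2];
    let weights = [1u16, 2, 1, 2, 1, 2];

    // Zip two arrays, weight each sample, and sum the result.
    let weighted: u32 = samples
        .iter()
        .zip(weights.iter())
        .map(|(s, w)| u32::from(s * w))
        .sum();
    assert_eq!(weighted, 44);

    // Find the largest sample together with its index.
    let max = samples.iter().enumerate().max_by_key(|&(_, s)| s);
    assert_eq!(max, Some((3, &9)));
}
```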
See the [Iterators in the Book] and [Iterator documentation] for more details.
[Iterators in the Book]: https://doc.rust-lang.org/book/ch13-02-iterators.html
[Iterator documentation]: https://doc.rust-lang.org/core/iter/trait.Iterator.html
## References vs Pointers
In Rust, pointers (called [_raw pointers_]) exist but are only used in specific circumstances, as dereferencing them is always considered `unsafe` -- Rust cannot provide its usual guarantees about what might be behind the pointer.
[_raw pointers_]: https://doc.rust-lang.org/book/ch19-01-unsafe-rust.html#dereferencing-a-raw-pointer
In most cases, we instead use _references_, indicated by the `&` symbol, or _mutable references_, indicated by `&mut`. References behave similarly to pointers, in that they can be dereferenced to access the underlying values, but they are a key part of Rust's ownership system: Rust will strictly enforce that you may only have one mutable reference _or_ multiple non-mutable references to the same value at any given time.
In practice this means you have to be more careful about whether you need mutable access to data: where in C the default is mutable and you must be explicit about `const`, in Rust the opposite is true.
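A small sketch of that difference in defaults: both the binding and the reference must opt in to mutability with `mut`:
```rust
fn add_sample(buf: &mut [u16; 4], idx: usize, value: u16) {
    // Mutation requires the `&mut` in the signature above.
    buf[idx] = value;
}

fn checksum(buf: &[u16; 4]) -> u16 {
    // A shared `&` reference is read-only.
    buf.iter().copied().fold(0u16, u16::wrapping_add)
}

fn main() {
    let mut buf = [0u16; 4]; // `mut` is needed to lend out `&mut buf`
    add_sample(&mut buf, 0, 42);
    assert_eq!(checksum(&buf), 42);
}
```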
One situation where you might still use raw pointers is interacting directly with hardware (for example, writing a pointer to a buffer into a DMA peripheral register), and they are also used under the hood for all peripheral access crates to allow you to read and write memory-mapped registers.
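As a sketch of that hardware case, with a made-up register address (`0x4002_6000` is purely illustrative, not a real device):
```rust,ignore
const DMA_SRC: *mut u32 = 0x4002_6000 as *mut u32;

fn start_transfer(buffer: &[u8]) {
    // The reference becomes a raw pointer, then the address value the
    // (hypothetical) DMA source-address register expects.
    let addr = buffer.as_ptr() as u32;
    unsafe { core::ptr::write_volatile(DMA_SRC, addr) };
}
```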
## Volatile Access
In C, individual variables may be marked `volatile`, indicating to the compiler that the value in the variable may change between accesses. Volatile variables are commonly used in an embedded context for memory-mapped registers.
In Rust, instead of marking a variable as `volatile`, we use specific methods to perform volatile access: [`core::ptr::read_volatile`] and [`core::ptr::write_volatile`]. These methods take a `*const T` or a `*mut T` (_raw pointers_, as discussed above) and perform a volatile read or write.
[`core::ptr::read_volatile`]: https://doc.rust-lang.org/core/ptr/fn.read_volatile.html
[`core::ptr::write_volatile`]: https://doc.rust-lang.org/core/ptr/fn.write_volatile.html
For example, in C you might write:
```c
volatile bool signalled = false;
void ISR() {
// Signal that the interrupt has occurred
signalled = true;
}
void driver() {
while(true) {
// Sleep until signalled
while(!signalled) { WFI(); }
// Reset signalled indicator
signalled = false;
// Perform some task that was waiting for the interrupt
run_task();
}
}
```
The equivalent in Rust would use volatile methods on each access:
```rust,ignore
static mut SIGNALLED: bool = false;
#[interrupt]
fn ISR() {
// Signal that the interrupt has occurred
// (In real code, you should consider a higher level primitive,
// such as an atomic type).
unsafe { core::ptr::write_volatile(&mut SIGNALLED, true) };
}
fn driver() {
loop {
// Sleep until signalled
while unsafe { !core::ptr::read_volatile(&SIGNALLED) } {}
// Reset signalled indicator
unsafe { core::ptr::write_volatile(&mut SIGNALLED, false) };
// Perform some task that was waiting for the interrupt
run_task();
}
}
```
A few things are worth noting in the code sample:
* We can pass `&mut SIGNALLED` into the function requiring `*mut T`, since `&mut T` automatically converts to a `*mut T` (and the same for `*const T`)
* We need `unsafe` blocks for the `read_volatile`/`write_volatile` methods, since they are `unsafe` functions. It is the programmer's responsibility to ensure safe use: see the methods' documentation for further details.
It is rare to require these functions directly in your code, as they will usually be taken care of for you by higher-level libraries. For memory mapped peripherals, the peripheral access crates will implement volatile access automatically, while for concurrency primitives there are better abstractions available (see the [Concurrency chapter]).
[Concurrency chapter]: ../concurrency/index.md
## Packed and Aligned Types
In embedded C it is common to tell the compiler a variable must have a certain alignment or a struct must be packed rather than aligned, usually to meet specific hardware or protocol requirements.
In Rust this is controlled by the `repr` attribute on a struct or union. The default representation provides no guarantees of layout, so should not be used for code that interoperates with hardware or C. The compiler may re-order struct members or insert padding and the behaviour may change with future versions of Rust.
```rust
struct Foo {
x: u16,
y: u8,
z: u16,
}
fn main() {
let v = Foo { x: 0, y: 0, z: 0 };
println!("{:p} {:p} {:p}", &v.x, &v.y, &v.z);
}
// 0x7ffecb3511d0 0x7ffecb3511d4 0x7ffecb3511d2
// Note ordering has been changed to x, z, y to improve packing.
```
To ensure layouts that are interoperable with C, use `repr(C)`:
```rust
#[repr(C)]
struct Foo {
x: u16,
y: u8,
z: u16,
}
fn main() {
let v = Foo { x: 0, y: 0, z: 0 };
println!("{:p} {:p} {:p}", &v.x, &v.y, &v.z);
}
// 0x7fffd0d84c60 0x7fffd0d84c62 0x7fffd0d84c64
// Ordering is preserved and the layout will not change over time.
// `z` is two-byte aligned so a byte of padding exists between `y` and `z`.
```
To ensure a packed representation, use `repr(packed)`:
```rust
#[repr(packed)]
struct Foo {
x: u16,
y: u8,
z: u16,
}
fn main() {
let v = Foo { x: 0, y: 0, z: 0 };
// Unsafe is required to borrow a field of a packed struct.
unsafe { println!("{:p} {:p} {:p}", &v.x, &v.y, &v.z) };
}
// 0x7ffd33598490 0x7ffd33598492 0x7ffd33598493
// No padding has been inserted between `y` and `z`, so now `z` is unaligned.
```
Note that using `repr(packed)` also sets the alignment of the type to `1`.
Finally, to specify a specific alignment, use `repr(align(n))`, where `n` is the number of bytes to align to (and must be a power of two):
```rust
#[repr(C)]
#[repr(align(4096))]
struct Foo {
x: u16,
y: u8,
z: u16,
}
fn main() {
let v = Foo { x: 0, y: 0, z: 0 };
let u = Foo { x: 0, y: 0, z: 0 };
println!("{:p} {:p} {:p}", &v.x, &v.y, &v.z);
println!("{:p} {:p} {:p}", &u.x, &u.y, &u.z);
}
// 0x7ffec909a000 0x7ffec909a002 0x7ffec909a004
// 0x7ffec909b000 0x7ffec909b002 0x7ffec909b004
// The two instances `u` and `v` have been placed on 4096-byte alignments,
// evidenced by the `000` at the end of their addresses.
```
Note we can combine `repr(C)` with `repr(align(n))` to obtain an aligned and C-compatible layout. It is not permissible to combine `repr(align(n))` with `repr(packed)`, since `repr(packed)` sets the alignment to `1`. It is also not permissible for a `repr(packed)` type to contain a `repr(align(n))` type.
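For instance, a descriptor that must both match a C layout and sit on a 16-byte boundary could combine the two representations (the type and field names are illustrative):
```rust
// C-compatible layout *and* 16-byte alignment.
#[repr(C, align(16))]
struct DmaDescriptor {
    src: u32,
    dst: u32,
    len: u16,
    flags: u16,
}

fn main() {
    assert_eq!(core::mem::align_of::<DmaDescriptor>(), 16);
    // 12 bytes of fields, rounded up to the 16-byte alignment.
    assert_eq!(core::mem::size_of::<DmaDescriptor>(), 16);
}
```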
For further details on type layouts, refer to the [type layout] chapter of the Rust Reference.
[type layout]: https://doc.rust-lang.org/reference/type-layout.html
## Other Resources
* In this book:
* [A little C with your Rust](../interoperability/c-with-rust.md)
* [A little Rust with your C](../interoperability/rust-with-c.md)
* [The Rust Embedded FAQs](https://docs.rust-embedded.org/faq.html)
* [Rust Pointers for C Programmers](http://blahg.josefsipek.net/?p=580)
* [I used to use pointers - now what?](https://github.com/diwic/reffers-rs/blob/master/docs/Pointers.md)

View File

@ -1,33 +1,25 @@
# Collections
# 容器
Eventually you'll want to use dynamic data structures (AKA collections) in your
program. `std` provides a set of common collections: [`Vec`], [`String`],
[`HashMap`], etc. All the collections implemented in `std` use a global dynamic
memory allocator (AKA the heap).
最终,您将要在程序中使用动态数据结构(也就是容器)。 `std`提供了一组通用容器:[`Vec`][`String`][`HashMap`]等。在`std`中实现的所有容器都使用了全局动态内存分配器(也称为堆)。
[`Vec`]: https://doc.rust-lang.org/std/vec/struct.Vec.html
[`String`]: https://doc.rust-lang.org/std/string/struct.String.html
[`HashMap`]: https://doc.rust-lang.org/std/collections/struct.HashMap.html
[`Vec`]:https://doc.rust-lang.org/std/vec/struct.Vec.html
[`String`]:https://doc.rust-lang.org/std/string/struct.String.html
[`HashMap`]:https://doc.rust-lang.org/std/collections/struct.HashMap.html
As `core` is, by definition, free of memory allocations these implementations
are not available there, but they can be found in the *unstable* `alloc` crate
that's shipped with the compiler.
`core`本身是没有动态内存分配的,但是编译器自带了一个**unstable**的`alloc` crate支持动态内存分配.
If you need collections, a heap allocated implementation is not your only
option. You can also use *fixed capacity* collections; one such implementation
can be found in the [`heapless`] crate.
如果需要容器,基于堆分配的实现并不是唯一的选择,您还可以使用*固定容量*的容器;[`heapless`] crate中就提供了这样一种实现。
[`heapless`]: https://crates.io/crates/heapless
[`heapless`]:https://crates.io/crates/heapless
In this section, we'll explore and compare these two implementations.
在本节中,我们将探索和比较这两种实现。
## Using `alloc`
## 使用`alloc`
The `alloc` crate is shipped with the standard Rust distribution. To import the
crate you can directly `use` it *without* declaring it as a dependency in your
`Cargo.toml` file.
标准的Rust发行版中已经附带了`alloc` crate。您可以直接`use`它,而无需在`Cargo.toml`文件中将其声明为依赖项。
``` rust,ignore
``` rust,ignore
#![feature(alloc)]
extern crate alloc;
@ -35,18 +27,13 @@ extern crate alloc;
use alloc::vec::Vec;
```
To be able to use any collection you'll first need use the `global_allocator`
attribute to declare the global allocator your program will use. It's required
that the allocator you select implements the [`GlobalAlloc`] trait.
要使用容器,您首先需要使用`global_allocator`属性来声明程序将使用的全局分配器。这个分配器要实现[`GlobalAlloc`]trait。
[`GlobalAlloc`]: https://doc.rust-lang.org/core/alloc/trait.GlobalAlloc.html
[`GlobalAlloc`]:https://doc.rust-lang.org/core/alloc/trait.GlobalAlloc.html
For completeness and to keep this section as self-contained as possible we'll
implement a simple bump pointer allocator and use that as the global allocator.
However, we *strongly* suggest you use a battle tested allocator from crates.io
in your program instead of this allocator.
为了内容完整并让本节尽可能独立,我们将实现一个简单的线性(bump pointer)分配器,并将其用作全局分配器。但是,我们*强烈*建议您在程序中使用crates.io上经过充分实战检验的分配器,而不是这个分配器。
``` rust,ignore
``` rust,ignore
// Bump pointer allocator implementation
extern crate cortex_m;
@ -102,11 +89,9 @@ static HEAP: BumpPointerAlloc = BumpPointerAlloc {
};
```
Apart from selecting a global allocator the user will also have to define how
Out Of Memory (OOM) errors are handled using the *unstable*
`alloc_error_handler` attribute.
除了选择全局分配器之外,用户还必须处理内存不足(OOM)错误,这个可以借助**unstable**的`alloc_error_handler`属性。
``` rust,ignore
``` rust,ignore
#![feature(alloc_error_handler)]
use cortex_m::asm;
@ -119,9 +104,10 @@ fn on_oom(_layout: Layout) -> ! {
}
```
Once all that is in place, the user can finally use the collections in `alloc`.
一切就绪后,就可以使用`alloc`中的容器了。
```rust,ignore
```rust,ignore
#[entry]
fn main() -> ! {
let mut xs = Vec::new();
@ -135,15 +121,13 @@ fn main() -> ! {
}
```
If you have used the collections in the `std` crate then these will be familiar
as they are exact same implementation.
这些容器与标准库中的容器实现完全一样,你使用起来会觉得非常熟悉.
## Using `heapless`
## 使用`heapless`
`heapless` requires no setup as its collections don't depend on a global memory
allocator. Just `use` its collections and proceed to instantiate them:
`heapless`不需要设置,因为其容器不依赖于全局内存分配器,所以开箱即用:
```rust,ignore
```rust,ignore
extern crate heapless; // v0.4.x
use heapless::Vec;
@ -157,106 +141,53 @@ fn main() -> ! {
assert_eq!(xs.pop(), Some(42));
}
```
您会注意到这些容器与alloc中的两个区别。
You'll note two differences between these collections and the ones in `alloc`.
首先,您必须预先声明容器的容量。`heapless`容器从不重新分配内存并且具有固定容量;容量是容器类型签名的一部分。这里我们声明`xs`的容量为8个元素,也就是说这个vector最多只能容纳8个元素。这由类型签名中的`U8`(请参阅[`typenum`])指明。
First, you have to declare upfront the capacity of the collection. `heapless`
collections never reallocate and have fixed capacities; this capacity is part of
the type signature of the collection. In this case we have declared that `xs`
has a capacity of 8 elements that is the vector can, at most, hold 8 elements.
This is indicated by the `U8` (see [`typenum`]) in the type signature.
[`typenum`]:https://crates.io/crates/typenum
[`typenum`]: https://crates.io/crates/typenum
其次,`push`方法以及许多其他方法都返回`Result`。由于`heapless`容器的容量是固定的,向容器中插入元素的所有操作都可能失败。API通过返回`Result`来表明操作是否成功。相反,`alloc`容器会在堆上重新分配内存来增加自己的容量。
Second, the `push` method, and many other methods, return a `Result`. Since the
`heapless` collections have fixed capacity all operations that insert elements
into the collection can potentially fail. The API reflects this problem by
returning a `Result` indicating whether the operation succeeded or not. In
contrast, `alloc` collections will reallocate themselves on the heap to increase
their capacity.
从v0.4.x版本开始所有`heapless`容器都内联存储所有元素。这意味着像`let x = heapless::Vec::new();`这样的操作将在栈上分配容器,当然你也可以在`static`变量上甚至在堆上分配容器(`Box<Vec<_, _>>`)。
As of version v0.4.x all `heapless` collections store all their elements inline.
This means that an operation like `let x = heapless::Vec::new();` will allocate
the collection on the stack, but it's also possible to allocate the collection
on a `static` variable, or even on the heap (`Box<Vec<_, _>>`).
## 权衡取舍
## Trade-offs
在堆分配可重定位的容器和固定容量的容器之间进行选择时,请从以下角度考虑.
Keep these in mind when choosing between heap allocated, relocatable collections
and fixed capacity collections.
### 内存不足(OOM)和错误处理
### Out Of Memory and error handling
使用堆分配总是存在内存不足的可能性,并且可能发生在需要增长容器的任何地方:例如,所有`alloc::Vec.push`调用都可能会导致OOM。因此某些操作可能会悄无声息的失败。某些alloc容器公开了`try_reserve`方法这些方法可让您在容器增长时检查潜在的OOM但您需要主动使用它们。
With heap allocations Out Of Memory is always a possibility and can occur in
any place where a collection may need to grow: for example, all
`alloc::Vec.push` invocations can potentially generate an OOM condition. Thus
some operations can *implicitly* fail. Some `alloc` collections expose
`try_reserve` methods that let you check for potential OOM conditions when
growing the collection but you need be proactive about using them.
如果您只使用`heapless`容器并且不在任何其他地方使用内存分配器那么肯定不会发生OOM。取而代之的是您每次都要考虑容器的容量问题。也就是您必须处理所有`Vec.push`之类的方法返回的`Result`。
If you exclusively use `heapless` collections and you don't use a memory
allocator for anything else then an OOM condition is impossible. Instead, you'll
have to deal with collections running out of capacity on a case by case basis.
That is you'll have deal with *all* the `Result`s returned by methods like
`Vec.push`.
OOM故障可能比对`heapless::Vec.push`返回的所有`Result`逐一`unwrap`更难调试,因为观察到的故障位置可能与引起问题的真正位置*不*一致。例如,如果由于其他容器发生内存泄漏(安全的Rust中也可能发生内存泄漏)而导致分配器几乎耗尽,那么即使是`vec.reserve(1)`也可能触发OOM。
OOM failures can be harder to debug than say `unwrap`-ing on all `Result`s
returned by `heapless::Vec.push` because the observed location of failure may
*not* match with the location of the cause of the problem. For example, even
`vec.reserve(1)` can trigger an OOM if the allocator is nearly exhausted because
some other collection was leaking memory (memory leaks are possible in safe
Rust).
### Memory usage
### 内存使用情况
Reasoning about memory usage of heap allocated collections is hard because the
capacity of long lived collections can change at runtime. Some operations may
implicitly reallocate the collection increasing its memory usage, and some
collections expose methods like `shrink_to_fit` that can potentially reduce the
memory used by the collection -- ultimately, it's up to the allocator to decide
whether to actually shrink the memory allocation or not. Additionally, the
allocator may have to deal with memory fragmentation which can increase the
*apparent* memory usage.
很难推断堆分配容器的内存使用情况,因为长期存活的容器的容量可以在运行时改变。有些操作可能会隐式地重新分配容器,从而增加其内存使用量;有些容器提供了`shrink_to_fit`之类的方法,可能会减少容器占用的内存,但最终是否真正缩小内存分配由分配器决定。此外,分配器可能还要处理内存碎片,这会增加*表面上*的内存使用量。
On the other hand if you exclusively use fixed capacity collections, store
most of them in `static` variables and set a maximum size for the call stack
then the linker will detect if you try to use more memory than what's physically
available.
另一方面,如果您使用固定容量容器,将它们中的大多数存储在静态变量中,并设置栈的最大大小,那么链接器会检测到您使用的内存是否超过实际可用的内存。
Furthermore, fixed capacity collections allocated on the stack will be reported
by [`-Z emit-stack-sizes`] flag which means that tools that analyze stack usage
(like [`stack-sizes`]) will include them in their analysis.
此外,分配在栈上的固定容量容器的大小可以通过[`-Z emit-stack-sizes`]参数来报告,分析栈使用情况的工具(例如[`stack-sizes`])会将此信息包含在分析结果中。
[`-Z emit-stack-sizes`]: https://doc.rust-lang.org/beta/unstable-book/compiler-flags/emit-stack-sizes.html
[`stack-sizes`]: https://crates.io/crates/stack-sizes
[`-Z emit-stack-sizes`]:https://doc.rust-lang.org/beta/unstable-book/compiler-flags/emit-stack-sizes.html
[`stack-sizes`]:https://crates.io/crates/stack-sizes
However, fixed capacity collections can *not* be shrunk which can result in
lower load factors (the ratio between the size of the collection and its
capacity) than what relocatable collections can achieve.
但是,固定容量的容器不能缩小,这可能导致其负载因子(容器实际大小与其容量之间的比率)低于堆分配的可重定位容器。
### Worst Case Execution Time (WCET)
### 最坏情况执行时间(WCET)
If you are building time sensitive applications or hard real time applications then
you care, maybe a lot, about the worst case execution time of the different
parts of your program.
如果要构建对时间敏感的应用程序或硬实时应用程序,那么您可能会担心程序的不同部分在最坏情况下的执行时间。
The `alloc` collections can reallocate so the WCET of operations that may grow
the collection will also include the time it takes to reallocate the collection,
which itself depends on the *runtime* capacity of the collection. This makes it
hard to determine the WCET of, for example, the `alloc::Vec.push` operation as
it depends on both the allocator being used and its runtime capacity.
`alloc`容器可能会重新分配内存因此容器增长操作的WCET也将包括重新分配容器所需的时间而这个时间取决于容器的运行时容量,这就很难确定WCET是多少. 例如`alloc::Vec.push`操作所用时间既依赖于所用的分配器实现算法也依赖于容器当时的容量。
On the other hand fixed capacity collections never reallocate so all operations
have a predictable execution time. For example, `heapless::Vec.push` executes in
constant time.
相比之下,固定容量容器永远不会重新分配内存,因此所有操作都具有可预测的执行时间。例如,`heapless::Vec.push`将在固定时间内执行。
### Ease of use
### 使用方便性
`alloc` requires setting up a global allocator whereas `heapless` does not.
However, `heapless` requires you to pick the capacity of each collection that
you instantiate.
`alloc`需要设置全局分配器,而 `heapless` 则不需要。但是 `heapless` 要求您在实例化时确定每个容器的容量。
The `alloc` API will be familiar to virtually every Rust developer. The
`heapless` API tries to closely mimic the `alloc` API but it will never be
exactly the same due to its explicit error handling -- some developers may feel
the explicit error handling is excessive or too cumbersome.
几乎每个Rust开发人员都熟悉`alloc`的API。`heapless`的API试图尽可能地模仿`alloc`的API,但由于其显式的错误处理,两者永远不会完全相同;一些开发人员可能会觉得显式错误处理过于冗长或繁琐。

src/collections/index_en.md Normal file
View File

@ -0,0 +1,190 @@
# Collections
Eventually you'll want to use dynamic data structures (AKA collections) in your program. `std` provides a set of common collections: [`Vec`], [`String`], [`HashMap`], etc. All the collections implemented in `std` use a global dynamic memory allocator (AKA the heap).
[`Vec`]: https://doc.rust-lang.org/std/vec/struct.Vec.html
[`String`]: https://doc.rust-lang.org/std/string/struct.String.html
[`HashMap`]: https://doc.rust-lang.org/std/collections/struct.HashMap.html
As `core` is, by definition, free of memory allocations these implementations are not available there, but they can be found in the *unstable* `alloc` crate that's shipped with the compiler.
If you need collections, a heap allocated implementation is not your only option. You can also use *fixed capacity* collections; one such implementation can be found in the [`heapless`] crate.
[`heapless`]: https://crates.io/crates/heapless
In this section, we'll explore and compare these two implementations.
## Using `alloc`
The `alloc` crate is shipped with the standard Rust distribution. To import the crate you can directly `use` it *without* declaring it as a dependency in your `Cargo.toml` file.
``` rust,ignore
#![feature(alloc)]
extern crate alloc;
use alloc::vec::Vec;
```
To be able to use any collection you'll first need use the `global_allocator` attribute to declare the global allocator your program will use. It's required that the allocator you select implements the [`GlobalAlloc`] trait.
[`GlobalAlloc`]: https://doc.rust-lang.org/core/alloc/trait.GlobalAlloc.html
For completeness and to keep this section as self-contained as possible we'll implement a simple bump pointer allocator and use that as the global allocator. However, we *strongly* suggest you use a battle tested allocator from crates.io in your program instead of this allocator.
``` rust,ignore
// Bump pointer allocator implementation

extern crate cortex_m;

use core::alloc::{GlobalAlloc, Layout};
use core::cell::UnsafeCell;
use core::ptr;

use cortex_m::interrupt;

// Bump pointer allocator for *single* core systems
struct BumpPointerAlloc {
    head: UnsafeCell<usize>,
    end: usize,
}

unsafe impl Sync for BumpPointerAlloc {}

unsafe impl GlobalAlloc for BumpPointerAlloc {
    unsafe fn alloc(&self, layout: Layout) -> *mut u8 {
        // `interrupt::free` is a critical section that makes our allocator safe
        // to use from within interrupts
        interrupt::free(|_| {
            let head = self.head.get();
            let size = layout.size();
            let align = layout.align();
            let align_mask = !(align - 1);

            // move start up to the next alignment boundary
            let start = (*head + align - 1) & align_mask;

            if start + size > self.end {
                // a null pointer signals an Out Of Memory condition
                ptr::null_mut()
            } else {
                *head = start + size;
                start as *mut u8
            }
        })
    }

    unsafe fn dealloc(&self, _: *mut u8, _: Layout) {
        // this allocator never deallocates memory
    }
}

// Declaration of the global memory allocator
// NOTE the user must ensure that the memory region `[0x2000_0100, 0x2000_0200]`
// is not used by other parts of the program
#[global_allocator]
static HEAP: BumpPointerAlloc = BumpPointerAlloc {
    head: UnsafeCell::new(0x2000_0100),
    end: 0x2000_0200,
};
```
Apart from selecting a global allocator the user will also have to define how Out Of Memory (OOM) errors are handled using the *unstable* `alloc_error_handler` attribute.
``` rust,ignore
#![feature(alloc_error_handler)]

use core::alloc::Layout;
use cortex_m::asm;

#[alloc_error_handler]
fn on_oom(_layout: Layout) -> ! {
    asm::bkpt();

    loop {}
}
```
Once all that is in place, the user can finally use the collections in `alloc`.
```rust,ignore
#[entry]
fn main() -> ! {
    let mut xs = Vec::new();

    xs.push(42);
    assert_eq!(xs.pop(), Some(42));

    loop {
        // ..
    }
}
```
If you have used the collections in the `std` crate then these will be familiar as they are exact same implementation.
## Using `heapless`
`heapless` requires no setup as its collections don't depend on a global memory allocator. Just `use` its collections and proceed to instantiate them:
```rust,ignore
extern crate heapless; // v0.4.x

use heapless::Vec;
use heapless::consts::*;

#[entry]
fn main() -> ! {
    let mut xs: Vec<_, U8> = Vec::new();

    xs.push(42).unwrap();
    assert_eq!(xs.pop(), Some(42));

    loop {
        // ..
    }
}
```
You'll note two differences between these collections and the ones in `alloc`.
First, you have to declare upfront the capacity of the collection. `heapless` collections never reallocate and have fixed capacities; this capacity is part of the type signature of the collection. In this case we have declared that `xs` has a capacity of 8 elements that is the vector can, at most, hold 8 elements. This is indicated by the `U8` (see [`typenum`]) in the type signature.
[`typenum`]: https://crates.io/crates/typenum
Second, the `push` method, and many other methods, return a `Result`. Since the `heapless` collections have fixed capacity all operations that insert elements into the collection can potentially fail. The API reflects this problem by returning a `Result` indicating whether the operation succeeded or not. In contrast, `alloc` collections will reallocate themselves on the heap to increase their capacity.
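A sketch of handling that `Result` explicitly rather than unwrapping, using the same heapless v0.4 `Vec<_, U8>` type as above (`record` is an invented helper):
```rust,ignore
use heapless::Vec;
use heapless::consts::U8;

fn record(log: &mut Vec<u16, U8>, sample: u16) -> Result<(), u16> {
    // Fails once the fixed capacity of 8 elements is reached,
    // handing the rejected sample back to the caller.
    log.push(sample)
}

fn main() {
    let mut log: Vec<u16, U8> = Vec::new();
    for s in 0..10u16 {
        if let Err(_dropped) = record(&mut log, s) {
            // Capacity reached: samples 8 and 9 are rejected here.
        }
    }
    assert_eq!(log.len(), 8);
}
```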
As of version v0.4.x all `heapless` collections store all their elements inline. This means that an operation like `let x = heapless::Vec::new();` will allocate the collection on the stack, but it's also possible to allocate the collection on a `static` variable, or even on the heap (`Box<Vec<_, _>>`).
## Trade-offs
Keep these in mind when choosing between heap allocated, relocatable collections and fixed capacity collections.
### Out Of Memory and error handling
With heap allocations Out Of Memory is always a possibility and can occur in any place where a collection may need to grow: for example, all `alloc::Vec.push` invocations can potentially generate an OOM condition. Thus some operations can *implicitly* fail. Some `alloc` collections expose `try_reserve` methods that let you check for potential OOM conditions when growing the collection but you need be proactive about using them.
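A host-side sketch of that pattern; `Vec::try_reserve` is available on recent stable Rust (it was still unstable when this chapter was written):
```rust,ignore
fn main() {
    let mut buf: Vec<u8> = Vec::new();

    // Ask for space up front and handle failure, instead of letting a
    // later `push` abort on OOM.
    if buf.try_reserve(512).is_ok() {
        buf.resize(512, 0);
    } else {
        // Degrade gracefully: shed load, reuse an existing buffer, etc.
    }
}
```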
If you exclusively use `heapless` collections and you don't use a memory allocator for anything else then an OOM condition is impossible. Instead, you'll have to deal with collections running out of capacity on a case by case basis. That is you'll have deal with *all* the `Result`s returned by methods like `Vec.push`.
OOM failures can be harder to debug than say `unwrap`-ing on all `Result`s returned by `heapless::Vec.push` because the observed location of failure may *not* match with the location of the cause of the problem. For example, even `vec.reserve(1)` can trigger an OOM if the allocator is nearly exhausted because some other collection was leaking memory (memory leaks are possible in safe Rust).
### Memory usage
Reasoning about memory usage of heap allocated collections is hard because the capacity of long lived collections can change at runtime. Some operations may implicitly reallocate the collection increasing its memory usage, and some collections expose methods like `shrink_to_fit` that can potentially reduce the memory used by the collection -- ultimately, it's up to the allocator to decide whether to actually shrink the memory allocation or not. Additionally, the allocator may have to deal with memory fragmentation which can increase the *apparent* memory usage.
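A host-side sketch of how capacity can drift away from length, and of `shrink_to_fit` as a request the allocator may or may not honour fully:
```rust
fn main() {
    let mut samples: Vec<u8> = Vec::with_capacity(1024);
    samples.extend_from_slice(&[1, 2, 3]);
    assert!(samples.capacity() >= 1024); // memory is held beyond what is used

    // Ask to release the excess; the allocator decides what actually happens.
    samples.shrink_to_fit();
    assert!(samples.capacity() >= samples.len());
}
```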
On the other hand if you exclusively use fixed capacity collections, store most of them in `static` variables and set a maximum size for the call stack then the linker will detect if you try to use more memory than what's physically available.
Furthermore, fixed capacity collections allocated on the stack will be reported by [`-Z emit-stack-sizes`] flag which means that tools that analyze stack usage (like [`stack-sizes`]) will include them in their analysis.
[`-Z emit-stack-sizes`]: https://doc.rust-lang.org/beta/unstable-book/compiler-flags/emit-stack-sizes.html
[`stack-sizes`]: https://crates.io/crates/stack-sizes
However, fixed capacity collections can *not* be shrunk, which can result in lower load factors (the ratio between the size of the collection and its capacity) than what relocatable collections can achieve.
### Worst Case Execution Time (WCET)
If you are building time-sensitive or hard real-time applications then you care, maybe a lot, about the worst case execution time of the different parts of your program.
The `alloc` collections can reallocate so the WCET of operations that may grow the collection will also include the time it takes to reallocate the collection, which itself depends on the *runtime* capacity of the collection. This makes it hard to determine the WCET of, for example, the `alloc::Vec.push` operation as it depends on both the allocator being used and its runtime capacity.
On the other hand fixed capacity collections never reallocate so all operations have a predictable execution time. For example, `heapless::Vec.push` executes in constant time.
### Ease of use
`alloc` requires setting up a global allocator whereas `heapless` does not. However, `heapless` requires you to pick the capacity of each collection that you instantiate.
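For reference, a rough sketch of the kind of one-time setup `alloc` needs, assuming the `alloc-cortex-m` allocator crate (the start address and size here are placeholders, and a real program also needs an allocation-error handler):

```rust,ignore
extern crate alloc;

use alloc_cortex_m::CortexMHeap;

#[global_allocator]
static ALLOCATOR: CortexMHeap = CortexMHeap::empty();

fn init_heap() {
    // Placeholder region: pick a start address and size that match your memory layout.
    let heap_start = 0x2000_0000;
    let heap_size = 1024;
    unsafe { ALLOCATOR.init(heap_start, heap_size) }
}
```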
The `alloc` API will be familiar to virtually every Rust developer. The `heapless` API tries to closely mimic the `alloc` API but it will never be exactly the same due to its explicit error handling -- some developers may feel the explicit error handling is excessive or too cumbersome.

# Concurrency
# 并发
Concurrency happens whenever different parts of your program might execute
at different times or out of order. In an embedded context, this includes:
只要程序的不同部分可能在不同时间执行或者乱序执行就存在并发。在嵌入式上下文中,这包括:
* interrupt handlers, which run whenever the associated interrupt happens,
* various forms of multithreading, where your microprocessor regularly swaps
between parts of your program,
* and in some systems, multiple-core microprocessors, where each core can be
independently running a different part of your program at the same time.
* 中断处理程序,每当相关中断发生时运行,
* 多种形式的多线程,您的微处理器定期在程序的各个部分之间进行交换,
* 在某些系统中是多核微处理器,其中每个核可以同时独立运行程序的不同部分。
Since many embedded programs need to deal with interrupts, concurrency will
usually come up sooner or later, and it's also where many subtle and difficult
bugs can occur. Luckily, Rust provides a number of abstractions and safety
guarantees to help us write correct code.
由于许多嵌入式程序需要处理中断因此并发通常迟早会出现这也是可能会发生许多细微而困难的错误的地方。幸运的是Rust提供了许多抽象和安全保证来帮助我们编写正确的代码。
## No Concurrency
## 没有并发
The simplest concurrency for an embedded program is no concurrency: your
software consists of a single main loop which just keeps running, and there
are no interrupts at all. Sometimes this is perfectly suited to the problem
at hand! Typically your loop will read some inputs, perform some processing,
and write some outputs.
嵌入式程序最简单的并发就是没有并发:您的软件由一个主循环组成,根本没有中断。有时,这非常适合现实情况!通常循环读取一些输入,执行一些处理,然后进行输出。
```rust,ignore
#[entry]
fn main() {
let peripherals = setup_peripherals();
}
```
Since there's no concurrency, there's no need to worry about sharing data
between parts of your program or synchronising access to peripherals. If
you can get away with such a simple approach this can be a great solution.
## Global Mutable Data
由于没有并发性,因此无需担心在程序各部分之间共享数据或对外设的同步访问。如果您可以采用这种简单的方法,那将是一个很好的解决方案。
Unlike non-embedded Rust, we will not usually have the luxury of creating
heap allocations and passing references to that data into a newly-created
thread. Instead our interrupt handlers might be called at any time and must
know how to access whatever shared memory we are using. At the lowest level,
this means we must have _statically allocated_ mutable memory, which
both the interrupt handler and the main code can refer to.
## 全局可变数据
In Rust, such [`static mut`] variables are always unsafe to read or write,
because without taking special care, you might trigger a race condition,
where your access to the variable is interrupted halfway through by an
interrupt which also accesses that variable.
与非嵌入式Rust不同我们通常不会奢侈地使用堆分配内存并将对该数据的引用传递到新创建的线程中。相反我们的中断处理程序可能随时被调用并且必须知道如何访问我们正在使用的任何共享内存。这意味着我们在底层必须具有“静态分配”的可变内存中断处理程序和主代码都可以引用该可变内存。
[`static mut`]: https://doc.rust-lang.org/book/ch19-01-unsafe-rust.html#accessing-or-modifying-a-mutable-static-variable
在Rust中此类['static mut`]变量读写始终是不安全的,因为如果不特别注意,您可能会触发竞争条件,其中对变量的访问可能会随时被中断,而相应中断处理程序同样需要访问该变量。
For an example of how this behaviour can cause subtle errors in your code,
consider an embedded program which counts rising edges of some input signal
in each one-second period (a frequency counter):
[`static mut`]:https://doc.rust-lang.org/book/ch19-01-unsafe-rust.html#accessing-or-modifying-a-mutable-static-variable
```rust,ignore
static mut COUNTER: u32 = 0;
#[entry]
}
```
Each second, the timer interrupt sets the counter back to 0. Meanwhile, the
main loop continually measures the signal, and increments the counter when
it sees a change from low to high. We've had to use `unsafe` to access
`COUNTER`, as it's `static mut`, and that means we're promising the compiler
we won't cause any undefined behaviour. Can you spot the race condition? The
increment on `COUNTER` is _not_ guaranteed to be atomic — in fact, on most
embedded platforms, it will be split into a load, then the increment, then
a store. If the interrupt fired after the load but before the store, the
reset back to 0 would be ignored after the interrupt returns — and we would
count twice as many transitions for that period.
定时器中断每秒都会将计数器重置为0。与此同时主循环不断地测量信号并在看到从低到高的变化时增加计数器。我们必须使用`unsafe`来访问`COUNTER`,因为它是`static mut`,使用`unsafe`意味着我们向编译器保证不会引起任何未定义的行为。你能发现其中的竞争问题吗?不能保证`COUNTER`上的增加是原子的-实际上,在大多数嵌入式平台上,它将被分为读入,增加,然后是写回。如果在读入之后但在写回之前发生了中断,则在中断返回后,重置为0的操作被忽略,我们将计数两倍的转换次数。
## Critical Sections
## 临界区
So, what can we do about data races? A simple approach is to use _critical
sections_, a context where interrupts are disabled. By wrapping the access to
`COUNTER` in `main` in a critical section, we can be sure the timer interrupt
will not fire until we're finished incrementing `COUNTER`:
那么,我们该如何处理数据竞赛?一种简单的方法是使用“临界区”,在关键部分中中断是被禁用的。通过将`main`函数中对`COUNTER`的访问部分放在临界区中,我们可以确保在完成递增`COUNTER`之前不会被计时器中断:
```rust,ignore
static mut COUNTER: u32 = 0;
#[entry]
}
```
In this example we use `cortex_m::interrupt::free`, but other platforms will
have similar mechanisms for executing code in a critical section. This is also
the same as disabling interrupts, running some code, and then re-enabling
interrupts.
在这个例子中,我们使用`cortex_m::interrupt::free`,其他平台也有类似的机制。这也与禁用中断,运行一些代码然后重新启用中断相同。
Note we didn't need to put a critical section inside the timer interrupt,
for two reasons:
请注意,由于两个原因,我们不需要在计时器中断中放置临界区:
* 向`COUNTER`写入0不会受到竞态问题的影响因为我们没有读它
* 它永远不会被`main`线程打断
* Writing 0 to `COUNTER` can't be affected by a race since we don't read it
* It will never be interrupted by the `main` thread anyway
如果`COUNTER`被多个可能相互抢占的中断处理程序共享,则每个中断处理程序也可能需要一个临界区。
If `COUNTER` was being shared by multiple interrupt handlers that might
_preempt_ each other, then each one might require a critical section as well.
这解决了我们的迫在眉睫的问题,但是我们仍然需要编写很多不安全的代码,这些代码我们需要仔细检查,并且可能会不必要地使用临界区。由于每个临界区都会暂时中止中断处理,因此会产生一些额外的代码,并产生更高的中断延迟和抖动(中断可能需要更长的时间才能被处理,并且处理之前的等待时间会更加不确定)。这是否有问题取决于您的系统,但总的来说,我们希望避免这种情况。
This solves our immediate problem, but we're still left writing a lot of unsafe code which we need to carefully reason about, and we might be using critical sections needlessly. Since each critical section temporarily pauses interrupt processing, there is an associated cost of some extra code size and higher interrupt latency and jitter (interrupts may take longer to be processed, and the time until they are processed will be more variable). Whether this is a problem depends on your system, but in general we'd like to avoid it.
值得注意的是,尽管临界区保证不会触发任何中断,但它不能在多核系统上提供排他性保证!另一个内核可能很高兴访问与您的内核相同的内存,即使没有中断也是如此。如果使用多个内核,则将需要更强大的同步原语。
It's worth noting that while a critical section guarantees no interrupts will
fire, it does not provide an exclusivity guarantee on multi-core systems! The
other core could be happily accessing the same memory as your core, even
without interrupts. You will need stronger synchronisation primitives if you
are using multiple cores.
## 原子访问
## Atomic Access
在某些平台上,可以使用原子指令,这些指令保证了读-修改-写操作(CAS compare and set)是原子的。在Cortex-M架构中`thumbv6`(Cortex-M0)不提供原子指令,而`thumbv7`(Cortex-M3及更高版本)则提供原子指令。这些指令可以避免禁用所有中断:我们可以尝试递增,它在大多数时间都会成功,但是如果被中断,它将自动重试整个递增操作。即使在多个内核之间,这些原子操作也是安全的。
On some platforms, atomic instructions are available, which provide guarantees
about read-modify-write operations. Specifically for Cortex-M, `thumbv6`
(Cortex-M0) does not provide atomic instructions, while `thumbv7` (Cortex-M3
and above) do. These instructions give an alternative to the heavy-handed
disabling of all interrupts: we can attempt the increment, it will succeed most
of the time, but if it was interrupted it will automatically retry the entire
increment operation. These atomic operations are safe even across multiple
cores.
```rust,ignore
use core::sync::atomic::{AtomicUsize, Ordering};
static COUNTER: AtomicUsize = AtomicUsize::new(0);
}
```
This time `COUNTER` is a safe `static` variable. Thanks to the `AtomicUsize`
type `COUNTER` can be safely modified from both the interrupt handler and the
main thread without disabling interrupts. When possible, this is a better
solution — but it may not be supported on your platform.
这次COUNTER是一个安全的static变量。由于使用了`AtomicUsize`类型,可以从中断处理程序和主线程安全地修改`COUNTER',而无需禁用中断。如果可能,这是一个更好的解决方案-但您的平台可能不支持它。
A note on [`Ordering`]: this affects how the compiler and hardware may reorder
instructions, and also has consequences on cache visibility. Assuming that the
target is a single core platform `Relaxed` is sufficient and the most efficient
choice in this particular case. Stricter ordering will cause the compiler to
emit memory barriers around the atomic operations; depending on what you're
using atomics for you may or may not need this! The precise details of the
atomic model are complicated and best described elsewhere.
关于[`Ordering]的注释:这会影响编译器和硬件如何对指令进行重新排序,并对缓存可见性产生影响。假设目标是单核心平台,那么 `Relaxed`就足够了,并且在这种情况下是最有效的选择。更严格的顺序将导致编译器在原子操作前后发出内存屏障。取决于您正在使用原子操作的种类,您可能需要也可能不需要更严格的顺序!原子模型的精确细节非常复杂,在其他地方有最好的描述。
For more details on atomics and ordering, see the [nomicon].
有关原子操作和顺序的更多详细信息,请参见[nomicon]。
[`Ordering`]: https://doc.rust-lang.org/core/sync/atomic/enum.Ordering.html
[nomicon]: https://doc.rust-lang.org/nomicon/atomics.html
[`Ordering`]: https://doc.rust-lang.org/core/sync/atomic/enum.Ordering.html
[nomicon]: https://doc.rust-lang.org/nomicon/atomics.html
## Abstractions, Send, and Sync
## 抽象Send和Sync
None of the above solutions are especially satisfactory. They require `unsafe` blocks which must be very carefully checked and are not ergonomic. Surely we can do better in Rust!
We can abstract our counter into a safe interface which can be safely used
anywhere else in our code. For this example we'll use the critical-section
counter, but you could do something very similar with atomics.
我们可以将计数器抽象为一个安全的接口,该接口可以在代码中的其他位置安全地使用。在此示例中,我们将使用临界区计数器,您仍然可以执行类似原子操作的操作。
```rust,ignore
use core::cell::UnsafeCell;
use cortex_m::interrupt;
}
```
We've moved our `unsafe` code to inside our carefully-planned abstraction,
and now our application code does not contain any `unsafe` blocks.
This design requires the application pass a `CriticalSection` token in:
these tokens are only safely generated by `interrupt::free`, so by requiring
one be passed in, we ensure we are operating inside a critical section, without
having to actually do the lock ourselves. This guarantee is provided statically
by the compiler: there won't be any runtime overhead associated with `cs`.
If we had multiple counters, they could all be given the same `cs`, without
requiring multiple nested critical sections.
我们已经将“不安全”代码移到了经过精心计划的抽象内部,现在,我们的应用程序代码不包含任何“不安全”代码。
This also brings up an important topic for concurrency in Rust: the
[`Send` and `Sync`] traits. To summarise the Rust book, a type is Send
when it can safely be moved to another thread, while it is Sync when
it can be safely shared between multiple threads. In an embedded context,
we consider interrupts to be executing in a separate thread to the application
code, so variables accessed by both an interrupt and the main code must be
Sync.
这种设计要求应用程序在其中传递一个“ CriticalSection”令牌这些令牌仅由`interrupt::free`安全地生成,通过传递一个令牌,我们确保我们在临界区内进行操作,而不必实际执行锁定。编译器静态地保证与`cs`相关操作没有任何运行时开销。如果我们有多个计数器,可以传递给它们相同的`cs`,而无需多个嵌套的临界区。
[`Send` and `Sync`]: https://doc.rust-lang.org/nomicon/send-and-sync.html
这也引出了Rust并发中的一个重要的话题[`Send`和`Sync`] trait。总结一下当一个类型可以安全地将其移动到另一个线程时它满足Send当一个类型可以在多个线程之间安全地只读地共享时它满足Sync。在嵌入式上下文中我们认为中断是在与应用程序代码不同的线程中执行的因此由中断和主程序代码访问的变量必须为Sync。
For most types in Rust, both of these traits are automatically derived for you
by the compiler. However, because `CSCounter` contains an [`UnsafeCell`], it is
not Sync, and therefore we could not make a `static CSCounter`: `static`
variables _must_ be Sync, since they can be accessed by multiple threads.
[`Send`和`Sync`]:https://doc.rust-lang.org/nomicon/send-and-sync.html
[`UnsafeCell`]: https://doc.rust-lang.org/core/cell/struct.UnsafeCell.html
对于Rust中的大多数类型这两个trait都是由编译器自动为您生成的。但是由于`CSCounter`包含[`UnsafeCell`],因此它不是`Sync`的,因此我们不能声明`static CSCounter``static`变量必须是Sync的因为它们可能被多个线程访问。
To tell the compiler we have taken care that the `CSCounter` is in fact safe
to share between threads, we implement the Sync trait explicitly. As with the
previous use of critical sections, this is only safe on single-core platforms:
with multiple cores you would need to go to greater lengths to ensure safety.
[`UnsafeCell`]:https://doc.rust-lang.org/core/cell/struct.UnsafeCell.html
## Mutexes
为了告诉编译器我们的`CSCounter`实际上可以安全地在线程之间共享,我们明确实现了`Sync` trait。与以前使用临界区一样这仅在单核平台上才是安全的对于多核您将需要做更多的工作才能确保安全。
We've created a useful abstraction specific to our counter problem, but
there are many common abstractions used for concurrency.
## 互斥锁(Mutex)
One such _synchronisation primitive_ is a mutex, short for mutual exclusion.
These constructs ensure exclusive access to a variable, such as our counter. A
thread can attempt to _lock_ (or _acquire_) the mutex, and either succeeds
immediately, or blocks waiting for the lock to be acquired, or returns an error
that the mutex could not be locked. While that thread holds the lock, it is
granted access to the protected data. When the thread is done, it _unlocks_ (or
_releases_) the mutex, allowing another thread to lock it. In Rust, we would
usually implement the unlock using the [`Drop`] trait to ensure it is always
released when the mutex goes out of scope.
我们已经针对计数器问题创建了一个有用的抽象,但是针对并发问题,有更多常见的抽象。
[`Drop`]: https://doc.rust-lang.org/core/ops/trait.Drop.html
一种这样的“同步原语”是互斥锁(mutex: mutual exclusion)。互斥锁确保对变量(例如我们的计数器)的独占访问。线程可以尝试执行互斥锁的_lock_(或_acquire_)结果可能是立即成功获取到锁或者阻塞等待直到获取到锁或者因为互斥锁无法锁定而返回错误。当该线程持有锁时它被授予对受保护数据的访问权限。访问完成后它将_unlocks_(或_releases_)互斥锁从而允许另一个线程将其锁定。在Rust中我们通常会使用[`Drop`]trait来实现解锁以确保在互斥锁超出范围时始终将其释放。
Using a mutex with interrupt handlers can be tricky: it is not normally
acceptable for the interrupt handler to block, and it would be especially
disastrous for it to block waiting for the main thread to release a lock,
since we would then _deadlock_ (the main thread will never release the lock
because execution stays in the interrupt handler). Deadlocking is not
considered unsafe: it is possible even in safe Rust.
[`Drop`]:https://doc.rust-lang.org/core/ops/trait.Drop.html
To avoid this behaviour entirely, we could implement a mutex which requires
a critical section to lock, just like our counter example. So long as the
critical section must last as long as the lock, we can be sure we have
exclusive access to the wrapped variable without even needing to track
the lock/unlock state of the mutex.
将互斥锁与中断处理程序一起使用可能会很棘手:中断处理程序通常无法接受阻塞,并且在中断中阻塞等待主线程释放锁尤其会造成灾难性的后果,因为这样我们就会死锁(线程永远不会释放锁,因为中断处理程序没有返回)。死锁并不被认为是不安全的即使在安全的Rust中也有可能。
This is in fact done for us in the `cortex_m` crate! We could have written
our counter using it:
为了完全避免这种行为,我们可以实现一个互斥锁,该互斥锁需要一个临界区进行锁定,就像前面的计数器示例一样。只要临界区必须持续与锁定一样长的时间,我们就可以确保对包装变量的独占访问权,甚至无需跟踪互斥锁的锁定/解锁状态。
```rust,ignore
use core::cell::Cell;
use cortex_m::interrupt::Mutex;
}
```
We're now using [`Cell`], which along with its sibling `RefCell` is used to
provide safe interior mutability. We've already seen `UnsafeCell` which is
the bottom layer of interior mutability in Rust: it allows you to obtain
multiple mutable references to its value, but only with unsafe code. A `Cell`
is like an `UnsafeCell` but it provides a safe interface: it only permits
taking a copy of the current value or replacing it, not taking a reference,
and since it is not Sync, it cannot be shared between threads. These
constraints mean it's safe to use, but we couldn't use it directly in a
`static` variable as a `static` must be Sync.
我们现在使用的是[`Cell`],它与`RefCell`一样用于提供安全的内部可变性。我们已经看到过`UnsafeCell`它是Rust中内部可变性的底层它允许您获取对其包括的值的多个可变引用但只能使用不安全的代码。一个`Cell`就像一个`UnsafeCell`一样但是它提供了一个安全的接口它只允许获取当前值的副本或替换当前值而获取不到引用并且由于它不满足Sync因此不能在线程之间共享。这些限制意味着可以安全使用但是我们不能直接在``static`变量中使用它,因为`static`必须为Sync。
[`Cell`]: https://doc.rust-lang.org/core/cell/struct.Cell.html
[`Cell`]: https://doc.rust-lang.org/core/cell/struct.Cell.html
So why does the example above work? The `Mutex<T>` implements Sync for any
`T` which is Send — such as a `Cell`. It can do this safely because it only
gives access to its contents during a critical section. We're therefore able
to get a safe counter with no unsafe code at all!
那么,为什么上面的示例起作用? `Mutex <T>`对要任何实现了`Send`的`T`(比如这里的`Cell`)都实现了`Sync`。它之所以安全,是因为它仅在临界区内允许访问其内容。因此,我们可以实现一个没有任何不安全代码的安全计数器!
This is great for simple types like the `u32` of our counter, but what about
more complex types which are not Copy? An extremely common example in an
embedded context is a peripheral struct, which generally are not Copy.
For that we can turn to `RefCell`.
这对于像`u32`这样的简单类型非常有用,但是对于没有实现`Copy`的更复杂类型呢?在嵌入式上下文中,一个非常常见的示例是外设结构体,他通常没有实现`Copy`。针对这种,我们可以使用`RefCell`。
## Sharing Peripherals
## 共享外设
Device crates generated using `svd2rust` and similar abstractions provide
safe access to peripherals by enforcing that only one instance of the
peripheral struct can exist at a time. This ensures safety, but makes it
difficult to access a peripheral from both the main thread and an interrupt
handler.
通过强制一次只能存在一个外设实例,使用`svd2rust`生成的`Device` crate和类似抽象提供了对外设的安全访问。这样虽然安全但是很难同时从主线程和中断处理程序访问外围设备。
To safely share peripheral access, we can use the `Mutex` we saw before. We'll
also need to use [`RefCell`], which uses a runtime check to ensure only one
reference to a peripheral is given out at a time. This has more overhead than
the plain `Cell`, but since we are giving out references rather than copies,
we must be sure only one exists at a time.
为了安全地共享外围设备访问权限,我们可以使用我们刚刚介绍的`Mutex`。我们还需要使用[`RefCell`]`RefCell`通过运行时检查来确保一次仅给出一个对外设的可变引用。这比普通的`Cell`有更多的开销,由于我们给出的是引用而不是副本,因此我们必须确保一次仅存在一个可变引用。
[`RefCell`]: https://doc.rust-lang.org/core/cell/struct.RefCell.html
[`RefCell`]:https://doc.rust-lang.org/core/cell/struct.RefCell.html
Finally, we'll also have to account for somehow moving the peripheral into
the shared variable after it has been initialised in the main code. To do
this we can use the `Option` type, initialised to `None` and later set to
the instance of the peripheral.
最后,在主代码中初始化外设后,我们还必须考虑将外设移入共享变量的方式。为此,我们可以使用`Option`类型,先将其初始化为`None` ,然后再将其设置为外设的实例。
```rust,ignore
use core::cell::RefCell;
use cortex_m::interrupt::{self, Mutex};
use stm32f4::stm32f405;
}
```
That's quite a lot to take in, so let's break down the important lines.
这段代码很复杂,让我们一行一行分析.
```rust,ignore
static MY_GPIO: Mutex<RefCell<Option<stm32f405::GPIOA>>> =
Mutex::new(RefCell::new(None));
```
Our shared variable is now a `Mutex` around a `RefCell` which contains an
`Option`. The `Mutex` ensures we only have access during a critical section,
and therefore makes the variable Sync, even though a plain `RefCell` would not
be Sync. The `RefCell` gives us interior mutability with references, which
we'll need to use our `GPIOA`. The `Option` lets us initialise this variable
to something empty, and only later actually move the variable in. We cannot
access the peripheral singleton statically, only at runtime, so this is
required.
现在,我们的共享变量的类型是` Mutex<RefCell<Option<stm32f405::GPIOA>>>`。 `Mutex”`可确保我们仅在临界区内具有访问权限,因此就算是`RefCell`不支持`Sync`,变量`MY_GPIO`也能够支持Sync。 `RefCell`为我们提供了带有引用的内部可变性, `Option`使我们可以先将该变量初始化为空稍后才将其实际内容移入。我们不能直接使用static的单例`GPIOA`,所有这一切都是必须的。
```rust,ignore
interrupt::free(|cs| MY_GPIO.borrow(cs).replace(Some(dp.GPIOA)));
```
Inside a critical section we can call `borrow()` on the mutex, which gives us
a reference to the `RefCell`. We then call `replace()` to move our new value
into the `RefCell`.
在临界区内,我们可以在互斥锁上调用 `borrow()`,从而获得`RefCell`的引用。然后,我们调用 `replace()`将新值移入`RefCell`。
```rust,ignore
interrupt::free(|cs| {
let gpioa = MY_GPIO.borrow(cs).borrow();
gpioa.as_ref().unwrap().odr.modify(|_, w| w.odr1().set_bit());
});
```
Finally we use `MY_GPIO` in a safe and concurrent fashion. The critical section prevents the interrupt firing as usual, and lets us borrow the mutex. The `RefCell` then gives us an `&Option<GPIOA>`, and tracks how long it remains borrowed - once that reference goes out of scope, the `RefCell` will be updated to indicate it is no longer borrowed.
Since we can't move the `GPIOA` out of the `&Option`, we need to convert it to an `&Option<&GPIOA>` with `as_ref()`, which we can finally `unwrap()` to obtain the `&GPIOA` which lets us modify the peripheral. (A shared `&GPIOA` is enough to change the output register here, because svd2rust-generated register access uses interior mutability and therefore takes `&self`.)
If we need a mutable references to shared resources, then `borrow_mut` and `deref_mut`
should be used instead. The following code shows an example using the TIM2 timer.
如果我们需要对共享资源的可变引用,则应该使用`borrow_mut` 和 `deref_mut`。以下代码显示了使用TIM2计时器的示例。
```rust,ignore
use core::cell::RefCell;
use core::ops::DerefMut;
use cortex_m::interrupt::{self, Mutex};
```
> **NOTE**
> **注意**
>
> At the moment, the `cortex-m` crate hides const versions of some functions
> (including `Mutex::new()`) behind the `const-fn` feature. So you need to add
> the `const-fn` feature as a dependency for cortex-m in the Cargo.toml to make
> the above examples work:
>目前, `cortex-m` crate将某些函数的const版本(包括 `Mutex::new()`)隐藏在`const-fn`特性的后面。因此您需要在Cargo.toml中将`const-fn`特性添加到cortex-m的依赖项以使上述示例起作用
>
> ``` toml
> [dependencies.cortex-m]
> version="0.6.0"
> features=["const-fn"]
> ```
> Meanwhile, `const-fn` has been working on stable Rust for some time now.
> So this additional switch in Cargo.toml will not be needed as soon as
> it is enabled in `cortex-m` by default.
>同时,`const-fn`已经在稳定版Rust上工作了一段时间。因此预计这个特性很快会成为`cortex-m`的默认配置,这样以后就不必在Cargo.toml中配置磁特性了。
>
Whew! This is safe, but it is also a little unwieldy. Is there anything else
we can do?
目前这样虽然安全,但还有点笨拙。我们还有什么可以做的吗?
## RTFM
One alternative is the [RTFM framework], short for Real Time For the Masses. It
enforces static priorities and tracks accesses to `static mut` variables
("resources") to statically ensure that shared resources are always accessed
safely, without requiring the overhead of always entering critical sections and
using reference counting (as in `RefCell`). This has a number of advantages such
as guaranteeing no deadlocks and giving extremely low time and memory overhead.
一种替代方法是[RTFM框架],RTFM的全称是Real Time For the Masses。它强制执行静态优先级并跟踪对`static mut` 变量(“资源”)的访问,以静态地确保始终安全地访问共享资源,而不需要临界区分和使用引用计数(如在“ RefCell”中)的开销。这具有许多优点,例如,确保没有死锁,并提供极低的时间和内存开销。
[RTFM framework]: https://github.com/japaric/cortex-m-rtfm
[RTFM框架]:https://github.com/rtfm-rs/cortex-m-rtfm
The framework also includes other features like message passing, which reduces
the need for explicit shared state, and the ability to schedule tasks to run at
a given time, which can be used to implement periodic tasks. Check out [the
documentation] for more information!
该框架还包括其他功能,例如消息传递,可以减少对显式共享状态的需求,还可以计划在给定时间运行的任务,可以用来执行定期任务。请查看[RTFM文档]以获取更多信息!
[the documentation]: https://japaric.github.io/cortex-m-rtfm/book/
[RTFM文档]:https://japaric.github.io/cortex-m-rtfm/book/
## Real Time Operating Systems
## 实时操作系统
Another common model for embedded concurrency is the real-time operating system
(RTOS). While currently less well explored in Rust, they are widely used in
traditional embedded development. Open source examples include [FreeRTOS] and
[ChibiOS]. These RTOSs provide support for running multiple application threads
which the CPU swaps between, either when the threads yield control (called
cooperative multitasking) or based on a regular timer or interrupts (preemptive
multitasking). The RTOS typically provide mutexes and other synchronisation
primitives, and often interoperate with hardware features such as DMA engines.
嵌入式并发的另一个常见模型是实时操作系统(RTOS)。尽管目前在Rust中的研究较少但它们已广泛用于传统的嵌入式开发中。开源的RTOS有[FreeRTOS]和[ChibiOS]。这些RTOS支持运行多个应用程序线程线程的调度触发机制包括线程主动出让控制权(称为协作多任务)和基于常规计时器或中断(称为抢占多任务)。 RTOS通常提供互斥锁和其他同步原语并且通常与DMA引擎等硬件特性进行互操作。
[FreeRTOS]: https://freertos.org/
[ChibiOS]: http://chibios.org/
[FreeRTOS]:https://freertos.org/
[ChibiOS]: http://chibios.org/
At the time of writing there are not many Rust RTOS examples to point to,
but it's an interesting area so watch this space!
在撰写本文时没有太多的Rust相关的RTOS但是这是一个有趣的领域所以请留意这个领域
## Multiple Cores
## 多核
It is becoming more common to have two or more cores in embedded processors,
which adds an extra layer of complexity to concurrency. All the examples using
a critical section (including the `cortex_m::interrupt::Mutex`) assume the only
other execution thread is the interrupt thread, but on a multi-core system
that's no longer true. Instead, we'll need synchronisation primitives designed
for multiple cores (also called SMP, for symmetric multi-processing).
在嵌入式处理器中拥有两个或多个内核变得越来越普遍,这给并发增加了额外的复杂性。所有使用临界区的示例(包括`cortex_m::interrupt::Mutex`)都假定只有中断线程,但在多核系统上不再如此。因此我们需要为多核专门设计同步原语(对于对称多处理也称为SMP)。
These typically use the atomic instructions we saw earlier, since the
processing system will ensure that atomicity is maintained over all cores.
多核系统通常使用我们之前看到的原子指令,因为处理系统将确保在所有内核上保持原子性。
Covering these topics in detail is currently beyond the scope of this book,
but the general patterns are the same as for the single-core case.
目前,详细讨论这些主题超出了本书的范围,但是一般模式与单核情况相同。

src/concurrency/index_en.md Normal file
# Concurrency
Concurrency happens whenever different parts of your program might execute at different times or out of order. In an embedded context, this includes:
* interrupt handlers, which run whenever the associated interrupt happens,
* various forms of multithreading, where your microprocessor regularly swaps between parts of your program,
* and in some systems, multiple-core microprocessors, where each core can be independently running a different part of your program at the same time.
Since many embedded programs need to deal with interrupts, concurrency will usually come up sooner or later, and it's also where many subtle and difficult bugs can occur. Luckily, Rust provides a number of abstractions and safety guarantees to help us write correct code.
## No Concurrency
The simplest concurrency for an embedded program is no concurrency: your software consists of a single main loop which just keeps running, and there are no interrupts at all. Sometimes this is perfectly suited to the problem at hand! Typically your loop will read some inputs, perform some processing, and write some outputs.
```rust,ignore
#[entry]
fn main() {
let peripherals = setup_peripherals();
loop {
let inputs = read_inputs(&peripherals);
let outputs = process(inputs);
write_outputs(&peripherals, outputs);
}
}
```
Since there's no concurrency, there's no need to worry about sharing data between parts of your program or synchronising access to peripherals. If you can get away with such a simple approach this can be a great solution.
## Global Mutable Data
Unlike non-embedded Rust, we will not usually have the luxury of creating heap allocations and passing references to that data into a newly-created thread. Instead our interrupt handlers might be called at any time and must know how to access whatever shared memory we are using. At the lowest level, this means we must have _statically allocated_ mutable memory, which both the interrupt handler and the main code can refer to.
In Rust, such [`static mut`] variables are always unsafe to read or write, because without taking special care, you might trigger a race condition, where your access to the variable is interrupted halfway through by an interrupt which also accesses that variable.
[`static mut`]: https://doc.rust-lang.org/book/ch19-01-unsafe-rust.html#accessing-or-modifying-a-mutable-static-variable
For an example of how this behaviour can cause subtle errors in your code, consider an embedded program which counts rising edges of some input signal in each one-second period (a frequency counter):
```rust,ignore
static mut COUNTER: u32 = 0;
#[entry]
fn main() -> ! {
set_timer_1hz();
let mut last_state = false;
loop {
let state = read_signal_level();
if state && !last_state {
// DANGER - Not actually safe! Could cause data races.
unsafe { COUNTER += 1 };
}
last_state = state;
}
}
#[interrupt]
fn timer() {
unsafe { COUNTER = 0; }
}
```
Each second, the timer interrupt sets the counter back to 0. Meanwhile, the main loop continually measures the signal, and increments the counter when it sees a change from low to high. We've had to use `unsafe` to access `COUNTER`, as it's `static mut`, and that means we're promising the compiler we won't cause any undefined behaviour. Can you spot the race condition? The increment on `COUNTER` is _not_ guaranteed to be atomic — in fact, on most embedded platforms, it will be split into a load, then the increment, then a store. If the interrupt fired after the load but before the store, the reset back to 0 would be ignored after the interrupt returns — and we would count twice as many transitions for that period.
## Critical Sections
So, what can we do about data races? A simple approach is to use _critical sections_, a context where interrupts are disabled. By wrapping the access to `COUNTER` in `main` in a critical section, we can be sure the timer interrupt will not fire until we're finished incrementing `COUNTER`:
```rust,ignore
static mut COUNTER: u32 = 0;
#[entry]
fn main() -> ! {
set_timer_1hz();
let mut last_state = false;
loop {
let state = read_signal_level();
if state && !last_state {
// New critical section ensures synchronised access to COUNTER
cortex_m::interrupt::free(|_| {
unsafe { COUNTER += 1 };
});
}
last_state = state;
}
}
#[interrupt]
fn timer() {
unsafe { COUNTER = 0; }
}
```
In this example we use `cortex_m::interrupt::free`, but other platforms will have similar mechanisms for executing code in a critical section. This is also the same as disabling interrupts, running some code, and then re-enabling interrupts.
Note we didn't need to put a critical section inside the timer interrupt, for two reasons:
* Writing 0 to `COUNTER` can't be affected by a race since we don't read it
* It will never be interrupted by the `main` thread anyway
If `COUNTER` was being shared by multiple interrupt handlers that might _preempt_ each other, then each one might require a critical section as well.
This solves our immediate problem, but we're still left writing a lot of unsafe code which we need to carefully reason about, and we might be using critical sections needlessly. Since each critical section temporarily pauses interrupt processing, there is an associated cost of some extra code size and higher interrupt latency and jitter (interrupts may take longer to be processed, and the time until they are processed will be more variable). Whether this is a problem depends on your system, but in general we'd like to avoid it.
It's worth noting that while a critical section guarantees no interrupts will fire, it does not provide an exclusivity guarantee on multi-core systems! The other core could be happily accessing the same memory as your core, even without interrupts. You will need stronger synchronisation primitives if you are using multiple cores.
## Atomic Access
On some platforms, atomic instructions are available, which provide guarantees about read-modify-write operations. Specifically for Cortex-M, `thumbv6` (Cortex-M0) does not provide atomic instructions, while `thumbv7` (Cortex-M3 and above) do. These instructions give an alternative to the heavy-handed disabling of all interrupts: we can attempt the increment, it will succeed most of the time, but if it was interrupted it will automatically retry the entire increment operation. These atomic operations are safe even across multiple cores.
```rust,ignore
use core::sync::atomic::{AtomicUsize, Ordering};
static COUNTER: AtomicUsize = AtomicUsize::new(0);
#[entry]
fn main() -> ! {
set_timer_1hz();
let mut last_state = false;
loop {
let state = read_signal_level();
if state && !last_state {
// Use `fetch_add` to atomically add 1 to COUNTER
COUNTER.fetch_add(1, Ordering::Relaxed);
}
last_state = state;
}
}
#[interrupt]
fn timer() {
// Use `store` to write 0 directly to COUNTER
COUNTER.store(0, Ordering::Relaxed)
}
```
This time `COUNTER` is a safe `static` variable. Thanks to the `AtomicUsize` type `COUNTER` can be safely modified from both the interrupt handler and the main thread without disabling interrupts. When possible, this is a better solution — but it may not be supported on your platform.
A note on [`Ordering`]: this affects how the compiler and hardware may reorder instructions, and also has consequences on cache visibility. Assuming that the target is a single core platform `Relaxed` is sufficient and the most efficient choice in this particular case. Stricter ordering will cause the compiler to emit memory barriers around the atomic operations; depending on what you're using atomics for you may or may not need this! The precise details of the atomic model are complicated and best described elsewhere.
For more details on atomics and ordering, see the [nomicon].
[`Ordering`]: https://doc.rust-lang.org/core/sync/atomic/enum.Ordering.html
[nomicon]: https://doc.rust-lang.org/nomicon/atomics.html
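As one hedged illustration of where something stricter than `Relaxed` is wanted, here is the common release/acquire hand-off pattern (the flag and the functions are made up for this sketch):

```rust,ignore
use core::sync::atomic::{AtomicBool, Ordering};

static DATA_READY: AtomicBool = AtomicBool::new(false);

fn publish() {
    // `Release` makes the writes that prepared the data visible
    // before any observer can see the flag flip to `true`.
    DATA_READY.store(true, Ordering::Release);
}

fn consume() -> bool {
    // `Acquire` pairs with the `Release` store above.
    DATA_READY.load(Ordering::Acquire)
}
```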
## Abstractions, Send, and Sync
None of the above solutions are especially satisfactory. They require `unsafe` blocks which must be very carefully checked and are not ergonomic. Surely we can do better in Rust!
We can abstract our counter into a safe interface which can be safely used anywhere else in our code. For this example we'll use the critical-section counter, but you could do something very similar with atomics.
```rust,ignore
use core::cell::UnsafeCell;
use cortex_m::interrupt;
// Our counter is just a wrapper around UnsafeCell<u32>, which is the heart
// of interior mutability in Rust. By using interior mutability, we can have
// COUNTER be `static` instead of `static mut`, but still able to mutate
// its counter value.
struct CSCounter(UnsafeCell<u32>);
const CS_COUNTER_INIT: CSCounter = CSCounter(UnsafeCell::new(0));
impl CSCounter {
pub fn reset(&self, _cs: &interrupt::CriticalSection) {
// By requiring a CriticalSection be passed in, we know we must
// be operating inside a CriticalSection, and so can confidently
// use this unsafe block (required to call UnsafeCell::get).
unsafe { *self.0.get() = 0 };
}
pub fn increment(&self, _cs: &interrupt::CriticalSection) {
unsafe { *self.0.get() += 1 };
}
}
// Required to allow static CSCounter. See explanation below.
unsafe impl Sync for CSCounter {}
// COUNTER is no longer `mut` as it uses interior mutability;
// therefore it also no longer requires unsafe blocks to access.
static COUNTER: CSCounter = CS_COUNTER_INIT;
#[entry]
fn main() -> ! {
set_timer_1hz();
let mut last_state = false;
loop {
let state = read_signal_level();
if state && !last_state {
// No unsafe here!
interrupt::free(|cs| COUNTER.increment(cs));
}
last_state = state;
}
}
#[interrupt]
fn timer() {
// We do need to enter a critical section here just to obtain a valid
// cs token, even though we know no other interrupt could pre-empt
// this one.
interrupt::free(|cs| COUNTER.reset(cs));
// We could use unsafe code to generate a fake CriticalSection if we
// really wanted to, avoiding the overhead:
// let cs = unsafe { interrupt::CriticalSection::new() };
}
```
We've moved our `unsafe` code to inside our carefully-planned abstraction, and now our application code does not contain any `unsafe` blocks.
This design requires the application pass a `CriticalSection` token in: these tokens are only safely generated by `interrupt::free`, so by requiring one be passed in, we ensure we are operating inside a critical section, without having to actually do the lock ourselves. This guarantee is provided statically by the compiler: there won't be any runtime overhead associated with `cs`. If we had multiple counters, they could all be given the same `cs`, without requiring multiple nested critical sections.
This also brings up an important topic for concurrency in Rust: the [`Send` and `Sync`] traits. To summarise the Rust book, a type is Send when it can safely be moved to another thread, while it is Sync when it can be safely shared between multiple threads. In an embedded context, we consider interrupts to be executing in a separate thread to the application code, so variables accessed by both an interrupt and the main code must be Sync.
[`Send` and `Sync`]: https://doc.rust-lang.org/nomicon/send-and-sync.html
For most types in Rust, both of these traits are automatically derived for you by the compiler. However, because `CSCounter` contains an [`UnsafeCell`], it is not Sync, and therefore we could not make a `static CSCounter`: `static` variables _must_ be Sync, since they can be accessed by multiple threads.
[`UnsafeCell`]: https://doc.rust-lang.org/core/cell/struct.UnsafeCell.html
To tell the compiler we have taken care that the `CSCounter` is in fact safe to share between threads, we implement the Sync trait explicitly. As with the previous use of critical sections, this is only safe on single-core platforms: with multiple cores you would need to go to greater lengths to ensure safety.
## Mutexes
We've created a useful abstraction specific to our counter problem, but there are many common abstractions used for concurrency.
One such _synchronisation primitive_ is a mutex, short for mutual exclusion. These constructs ensure exclusive access to a variable, such as our counter. A thread can attempt to _lock_ (or _acquire_) the mutex, and either succeeds immediately, or blocks waiting for the lock to be acquired, or returns an error that the mutex could not be locked. While that thread holds the lock, it is granted access to the protected data. When the thread is done, it _unlocks_ (or _releases_) the mutex, allowing another thread to lock it. In Rust, we would usually implement the unlock using the [`Drop`] trait to ensure it is always released when the mutex goes out of scope.
[`Drop`]: https://doc.rust-lang.org/core/ops/trait.Drop.html
Using a mutex with interrupt handlers can be tricky: it is not normally acceptable for the interrupt handler to block, and it would be especially disastrous for it to block waiting for the main thread to release a lock, since we would then _deadlock_ (the main thread will never release the lock because execution stays in the interrupt handler). Deadlocking is not considered unsafe: it is possible even in safe Rust.
To avoid this behaviour entirely, we could implement a mutex which requires a critical section to lock, just like our counter example. So long as the critical section must last as long as the lock, we can be sure we have exclusive access to the wrapped variable without even needing to track the lock/unlock state of the mutex.
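A hedged sketch of that idea (this is not the real `cortex_m` type, just an illustration of why no locked/unlocked state needs to be tracked):

```rust,ignore
use core::cell::UnsafeCell;
use cortex_m::interrupt::CriticalSection;

pub struct CsMutex<T> {
    inner: UnsafeCell<T>,
}

impl<T> CsMutex<T> {
    pub const fn new(value: T) -> Self {
        CsMutex { inner: UnsafeCell::new(value) }
    }

    // The "lock" is the critical section token itself: the borrow cannot
    // outlive the `CriticalSection`, so there is no flag to set or clear.
    pub fn borrow<'cs>(&'cs self, _cs: &'cs CriticalSection) -> &'cs T {
        unsafe { &*self.inner.get() }
    }
}

// Safe on single-core targets only: the critical section keeps interrupts out,
// so nothing else can observe the contents while they are borrowed.
unsafe impl<T> Sync for CsMutex<T> where T: Send {}
```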
This is in fact done for us in the `cortex_m` crate! We could have written our counter using it:
```rust,ignore
use core::cell::Cell;
use cortex_m::interrupt::Mutex;
static COUNTER: Mutex<Cell<u32>> = Mutex::new(Cell::new(0));
#[entry]
fn main() -> ! {
set_timer_1hz();
let mut last_state = false;
loop {
let state = read_signal_level();
if state && !last_state {
interrupt::free(|cs|
COUNTER.borrow(cs).set(COUNTER.borrow(cs).get() + 1));
}
last_state = state;
}
}
#[interrupt]
fn timer() {
// We still need to enter a critical section here to satisfy the Mutex.
interrupt::free(|cs| COUNTER.borrow(cs).set(0));
}
```
We're now using [`Cell`], which along with its sibling `RefCell` is used to provide safe interior mutability. We've already seen `UnsafeCell` which is the bottom layer of interior mutability in Rust: it allows you to obtain multiple mutable references to its value, but only with unsafe code. A `Cell` is like an `UnsafeCell` but it provides a safe interface: it only permits taking a copy of the current value or replacing it, not taking a reference, and since it is not Sync, it cannot be shared between threads. These constraints mean it's safe to use, but we couldn't use it directly in a `static` variable as a `static` must be Sync.
[`Cell`]: https://doc.rust-lang.org/core/cell/struct.Cell.html
So why does the example above work? The `Mutex<T>` implements Sync for any `T` which is Send — such as a `Cell`. It can do this safely because it only gives access to its contents during a critical section. We're therefore able to get a safe counter with no unsafe code at all!
This is great for simple types like the `u32` of our counter, but what about more complex types which are not Copy? An extremely common example in an embedded context is a peripheral struct, which generally are not Copy. For that we can turn to `RefCell`.
## Sharing Peripherals
Device crates generated using `svd2rust` and similar abstractions provide safe access to peripherals by enforcing that only one instance of the peripheral struct can exist at a time. This ensures safety, but makes it difficult to access a peripheral from both the main thread and an interrupt handler.
To safely share peripheral access, we can use the `Mutex` we saw before. We'll also need to use [`RefCell`], which uses a runtime check to ensure only one reference to a peripheral is given out at a time. This has more overhead than the plain `Cell`, but since we are giving out references rather than copies, we must be sure only one exists at a time.
[`RefCell`]: https://doc.rust-lang.org/core/cell/struct.RefCell.html
Finally, we'll also have to account for somehow moving the peripheral into the shared variable after it has been initialised in the main code. To do this we can use the `Option` type, initialised to `None` and later set to the instance of the peripheral.
```rust,ignore
use core::cell::RefCell;
use cortex_m::interrupt::{self, Mutex};
use stm32f4::stm32f405;
static MY_GPIO: Mutex<RefCell<Option<stm32f405::GPIOA>>> =
Mutex::new(RefCell::new(None));
#[entry]
fn main() -> ! {
// Obtain the peripheral singletons and configure it.
// This example is from an svd2rust-generated crate, but
// most embedded device crates will be similar.
let dp = stm32f405::Peripherals::take().unwrap();
let gpioa = &dp.GPIOA;
// Some sort of configuration function.
// Assume it sets PA0 to an input and PA1 to an output.
configure_gpio(gpioa);
// Store the GPIOA in the mutex, moving it.
interrupt::free(|cs| MY_GPIO.borrow(cs).replace(Some(dp.GPIOA)));
// We can no longer use `gpioa` or `dp.GPIOA`, and instead have to
// access it via the mutex.
// Be careful to enable the interrupt only after setting MY_GPIO:
// otherwise the interrupt might fire while it still contains None,
// and as-written (with `unwrap()`), it would panic.
set_timer_1hz();
let mut last_state = false;
loop {
// We'll now read state as a digital input, via the mutex
let state = interrupt::free(|cs| {
let gpioa = MY_GPIO.borrow(cs).borrow();
gpioa.as_ref().unwrap().idr.read().idr0().bit_is_set()
});
if state && !last_state {
// Set PA1 high if we've seen a rising edge on PA0.
interrupt::free(|cs| {
let gpioa = MY_GPIO.borrow(cs).borrow();
gpioa.as_ref().unwrap().odr.modify(|_, w| w.odr1().set_bit());
});
}
last_state = state;
}
}
#[interrupt]
fn timer() {
// This time in the interrupt we'll just clear PA0.
interrupt::free(|cs| {
// We can use `unwrap()` because we know the interrupt wasn't enabled
// until after MY_GPIO was set; otherwise we should handle the potential
// for a None value.
let gpioa = MY_GPIO.borrow(cs).borrow();
gpioa.as_ref().unwrap().odr.modify(|_, w| w.odr1().clear_bit());
});
}
```
That's quite a lot to take in, so let's break down the important lines.
```rust,ignore
static MY_GPIO: Mutex<RefCell<Option<stm32f405::GPIOA>>> =
Mutex::new(RefCell::new(None));
```
Our shared variable is now a `Mutex` around a `RefCell` which contains an `Option`. The `Mutex` ensures we only have access during a critical section, and therefore makes the variable Sync, even though a plain `RefCell` would not be Sync. The `RefCell` gives us interior mutability with references, which we'll need to use our `GPIOA`. The `Option` lets us initialise this variable to something empty, and only later actually move the variable in. We cannot access the peripheral singleton statically, only at runtime, so this is required.
```rust,ignore
interrupt::free(|cs| MY_GPIO.borrow(cs).replace(Some(dp.GPIOA)));
```
Inside a critical section we can call `borrow()` on the mutex, which gives us a reference to the `RefCell`. We then call `replace()` to move our new value into the `RefCell`.
```rust,ignore
interrupt::free(|cs| {
let gpioa = MY_GPIO.borrow(cs).borrow();
gpioa.as_ref().unwrap().odr.modify(|_, w| w.odr1().set_bit());
});
```
Finally we use `MY_GPIO` in a safe and concurrent fashion. The critical section prevents the interrupt firing as usual, and lets us borrow the mutex. The `RefCell` then gives us an `&Option<GPIOA>`, and tracks how long it remains borrowed - once that reference goes out of scope, the `RefCell` will be updated to indicate it is no longer borrowed.
Since we can't move the `GPIOA` out of the `&Option`, we need to convert it to an `&Option<&GPIOA>` with `as_ref()`, which we can finally `unwrap()` to obtain the `&GPIOA` which lets us modify the peripheral.
If we need a mutable references to shared resources, then `borrow_mut` and `deref_mut` should be used instead. The following code shows an example using the TIM2 timer.
```rust,ignore
use core::cell::RefCell;
use core::ops::DerefMut;
use cortex_m::interrupt::{self, Mutex};
use cortex_m::asm::wfi;
use stm32f4::stm32f405;
static G_TIM: Mutex<RefCell<Option<Timer<stm32::TIM2>>>> =
Mutex::new(RefCell::new(None));
#[entry]
fn main() -> ! {
let mut cp = cm::Peripherals::take().unwrap();
let dp = stm32f405::Peripherals::take().unwrap();
// Some sort of timer configuration function.
// Assume it configures the TIM2 timer, its NVIC interrupt,
// and finally starts the timer.
let tim = configure_timer_interrupt(&mut cp, dp);
interrupt::free(|cs| {
G_TIM.borrow(cs).replace(Some(tim));
});
loop {
wfi();
}
}
#[interrupt]
fn timer() {
interrupt::free(|cs| {
if let Some(ref mut tim) = G_TIM.borrow(cs).borrow_mut().deref_mut() {
tim.start(1.hz());
}
});
}
```
> **NOTE**
>
> At the moment, the `cortex-m` crate hides const versions of some functions (including `Mutex::new()`) behind the `const-fn` feature. So you need to add the `const-fn` feature as a dependency for cortex-m in the Cargo.toml to make the above examples work:
>
> ``` toml
> [dependencies.cortex-m]
> version="0.6.0"
> features=["const-fn"]
> ```
> Meanwhile, `const-fn` has been working on stable Rust for some time now. So this additional switch in Cargo.toml will not be needed as soon as it is enabled in `cortex-m` by default.
>
Whew! This is safe, but it is also a little unwieldy. Is there anything else we can do?
## RTFM
One alternative is the [RTFM framework], short for Real Time For the Masses. It enforces static priorities and tracks accesses to `static mut` variables ("resources") to statically ensure that shared resources are always accessed safely, without requiring the overhead of always entering critical sections and using reference counting (as in `RefCell`). This has a number of advantages such as guaranteeing no deadlocks and giving extremely low time and memory overhead.
[RTFM framework]: https://github.com/japaric/cortex-m-rtfm
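As a rough, hedged sketch of what such an application looks like (attribute and module names follow the RTFM v0.5 release, the device path is illustrative, and later RTIC releases differ):

```rust,ignore
#[rtfm::app(device = stm32f4::stm32f405, peripherals = true)]
const APP: () = {
    #[init]
    fn init(cx: init::Context) {
        // `cx.device` hands us the device peripherals that `Peripherals::take()`
        // would otherwise provide; configure the TIM2 interrupt here.
        let _device = cx.device;
    }

    // Hardware task: runs on every TIM2 interrupt at a statically assigned priority.
    #[task(binds = TIM2)]
    fn on_tim2(_cx: on_tim2::Context) {}

    #[idle]
    fn idle(_cx: idle::Context) -> ! {
        loop {
            cortex_m::asm::wfi();
        }
    }
};
```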
The framework also includes other features like message passing, which reduces the need for explicit shared state, and the ability to schedule tasks to run at a given time, which can be used to implement periodic tasks. Check out [the documentation] for more information!
[the documentation]: https://japaric.github.io/cortex-m-rtfm/book/
## Real Time Operating Systems
Another common model for embedded concurrency is the real-time operating system (RTOS). While currently less well explored in Rust, they are widely used in traditional embedded development. Open source examples include [FreeRTOS] and [ChibiOS]. These RTOSs provide support for running multiple application threads which the CPU swaps between, either when the threads yield control (called cooperative multitasking) or based on a regular timer or interrupts (preemptive multitasking). The RTOS typically provide mutexes and other synchronisation primitives, and often interoperate with hardware features such as DMA engines.
[FreeRTOS]: https://freertos.org/
[ChibiOS]: http://chibios.org/
At the time of writing there are not many Rust RTOS examples to point to, but it's an interesting area so watch this space!
## Multiple Cores
It is becoming more common to have two or more cores in embedded processors, which adds an extra layer of complexity to concurrency. All the examples using a critical section (including the `cortex_m::interrupt::Mutex`) assume the only other execution thread is the interrupt thread, but on a multi-core system that's no longer true. Instead, we'll need synchronisation primitives designed for multiple cores (also called SMP, for symmetric multi-processing).
These typically use the atomic instructions we saw earlier, since the processing system will ensure that atomicity is maintained over all cores.
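For instance, a hedged sketch of the spinlock idea an SMP mutex is usually built on (a real implementation also wraps the protected data and returns a guard):

```rust,ignore
use core::sync::atomic::{AtomicBool, Ordering};

pub struct SpinFlag {
    locked: AtomicBool,
}

impl SpinFlag {
    pub const fn new() -> Self {
        SpinFlag { locked: AtomicBool::new(false) }
    }

    pub fn lock(&self) {
        // An atomic compare-and-swap stays atomic across cores, unlike a
        // critical section, which only masks interrupts on the current core.
        while self
            .locked
            .compare_exchange_weak(false, true, Ordering::Acquire, Ordering::Relaxed)
            .is_err()
        {}
    }

    pub fn unlock(&self) {
        self.locked.store(false, Ordering::Release);
    }
}
```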
Covering these topics in detail is currently beyond the scope of this book, but the general patterns are the same as for the single-core case.

# A little C with your Rust
# Rust中使用C代码
Using C or C++ inside of a Rust project consists of two major parts:
在Rust项目中使用C或C++代码包含两个主要部分:
- Wrapping the exposed C API for use with Rust
- Building your C or C++ code to be integrated with the Rust code
- 封装导出的的C API以供Rust调用
- 构建要与Rust代码集成的C或C++代码
As C++ does not have a stable ABI for the Rust compiler to target, it is recommended to use the `C` ABI when combining Rust with C or C++.
由于C++没有稳定的ABI因此将Rust与C或C++结合使用时,建议使用`C` ABI。
## Defining the interface
## 定义接口
Before consuming C or C++ code from Rust, it is necessary to define (in Rust) what data types and function signatures exist in the linked code. In C or C++, you would include a header (`.h` or `.hpp`) file which defines this data. In Rust, it is necessary to either manually translate these definitions to Rust, or use a tool to generate these definitions.
在Rust中使用C或C++代码之前,有必要定义(用Rust编写)这些代码中存在哪些数据类型和函数。在C或C++中使用这些代码时,您需要包含定义相关的头文件(“.h”或“.hpp”)。在Rust中需要将这些头文件手动转换为Rust代码或使用工具生成。
First, we will cover manually translating these definitions from C/C++ to Rust.
首先我们将介绍如何将这些代码从C/C++手动转换为Rust。
### Wrapping C functions and Datatypes
### 封装C函数和数据类型
Typically, libraries written in C or C++ will provide a header file defining all types and functions used in public interfaces. An example file may look like this:
通常用C或C++编写的库将提供头文件,该头文件定义公共接口中使用的所有类型和函数。比如下面的例子:
```C
/* File: cool.h */
void cool_function(int i, char c, CoolStruct* cs);
```
When translated to Rust, this interface would look as such:
转换为Rust后代码如下所示
```rust,ignore
/* File: cool_bindings.rs */
#[repr(C)]
pub struct CoolStruct {
);
```
Let's take a look at this definition one piece at a time, to explain each of the parts.
让我们一次查看一个定义,以解释每个部分。
```rust,ignore
#[repr(C)]
pub struct CoolStruct { ... }
```
By default, Rust does not guarantee order, padding, or the size of data included in a `struct`. In order to guarantee compatibility with C code, we include the `#[repr(C)]` attribute, which instructs the Rust compiler to always use the same rules C does for organizing data within a struct.
```rust,ignore
pub x: cty::c_int,
pub y: cty::c_int,
```
Due to the flexibility of how C or C++ defines an `int` or `char`, it is recommended to use primitive data types defined in `cty`, which will map types from C to types in Rust
由于C/C++中`int`和`char`类型的灵活性,建议使用`cty`中定义的原始数据类型它将原始类型从C映射到Rust中的类型。
```rust,ignore
pub extern "C" fn cool_function( ... );
```
This statement defines the signature of a function that uses the C ABI, called `cool_function`. By defining the signature without defining the body of the function, the definition of this function will need to be provided elsewhere, or linked into the final library or binary from a static library.
该语句定义使用C ABI的函数的签名称为“ cool_function”。这里只定义了签名需要在其他位置提供此函数的定义或者将其链接到相关的动态或者库文件中。
```rust,ignore
i: cty::c_int,
c: cty::c_char,
cs: *mut CoolStruct
```
Similar to our datatype above, we define the datatypes of the function arguments using C-compatible definitions. We also retain the same argument names, for clarity.
与上面的数据类型类似我们使用C兼容的定义来定义函数参数的数据类型。为了清楚起见我们还保留相同的参数名称。
We have one new type here, `*mut CoolStruct`. As C does not have a concept of Rust's references, which would look like this: `&mut CoolStruct`, we instead have a raw pointer. As dereferencing this pointer is `unsafe`, and the pointer may in fact be a `null` pointer, care must be taken to ensure the guarantees typical of Rust when interacting with C or C++ code.
我们这里有一种新类型,即`*mut CoolStruct`。由于C没有Rust引用的概念`mut CoolStruct`因此我们有一个裸指针。由于解引用此指针是“不安全的”并且实际上该指针可能是“空”指针因此在与C或C++代码进行交互时必须小心确保Rust的典型保证。
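In practice you will usually want to hide that raw pointer behind a small safe wrapper. A minimal sketch, assuming the binding lives in an `extern "C"` block the way `bindgen` would emit it:

```rust,ignore
extern "C" {
    fn cool_function(i: cty::c_int, c: cty::c_char, cs: *mut CoolStruct);
}

// Safe wrapper: the `&mut` guarantees the pointer handed to C is non-null,
// well-aligned and uniquely borrowed for the duration of the call.
pub fn cool(i: cty::c_int, c: cty::c_char, cs: &mut CoolStruct) {
    unsafe { cool_function(i, c, cs) }
}
```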
### Automatically generating the interface
### 自动生成接口
Rather than manually generating these interfaces, which may be tedious and error prone, there is a tool called [bindgen] which will perform these conversions automatically. For instructions of the usage of [bindgen], please refer to the [bindgen user's manual], however the typical process consists of the following:
相比手动生成这些接口(可能很乏味且容易出错),可以使用一种名为[bindgen]的工具来自动执行这些转换。有关[bindgen]用法的说明,请参阅[bindgen用户手册],但是典型过程包括以下内容:
1. Gather all C or C++ headers defining interfaces or datatypes you would like to use with Rust
2. Write a `bindings.h` file, which `#include "..."`'s each of the files you gathered in step one
3. Feed this `bindings.h` file, along with any compilation flags used to compile
your code into `bindgen`. Tip: use `Builder.ctypes_prefix("cty")` /
`--ctypes-prefix=cty` and `Builder.use_core()` to make the generated code `#![no_std]` compatible.
4. `bindgen` will produce the generated Rust code to the output of the terminal window. This file may be piped to a file in your project, such as `bindings.rs`. You may use this file in your Rust project to interact with C/C++ code compiled and linked as an external library. Tip: don't forget to use the [`cty`](https://crates.io/crates/cty) crate if your types in the generated bindings are prefixed with `cty`.
1. 收集所有要在Rust中使用的接口或数据类型的C或C++头文件
2. 编写一个“bindings.h”文件其中的“ #include“ ...”`是您在第一步中收集的每个文件。
3. 将此`bindings.h`文件连同编译代码所用的所有编译标志一起提供给`bindgen`。提示:使用`Builder.ctypes_prefix("cty")` / `--ctypes-prefix=cty`以及`Builder.use_core()`,这样生成的代码才能与`#![no_std]`兼容。
4. `bindgen`会将生成的Rust代码输出到终端,可以通过管道把它重定向到项目中的文件,例如`bindings.rs`。您可以在Rust项目中使用此文件,与编译并链接为外部库的C/C++代码进行交互。提示:如果生成的绑定中的类型以`cty`为前缀,请不要忘记使用[`cty`](https://crates.io/crates/cty) crate。下面还给出了一个在`build.rs`中调用`bindgen`的示意性示例。
[bindgen]: https://github.com/rust-lang/rust-bindgen
[bindgen user's manual]: https://rust-lang.github.io/rust-bindgen/
[bindgen用户手册]: https://rust-lang.github.io/rust-bindgen/
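For illustration only, the steps above can also be scripted in a `build.rs` using bindgen's library API; the wrapper header name `bindings.h` matches step 2, while the output location (Cargo's `OUT_DIR`) and the builder options are assumptions made for this sketch, not something the book's example project defines:

```rust,ignore
// build.rs — hypothetical sketch; add `bindgen` to [build-dependencies] first.
extern crate bindgen;

use std::env;
use std::path::PathBuf;

fn main() {
    let bindings = bindgen::Builder::default()
        .header("bindings.h")   // the wrapper header from step 2
        .ctypes_prefix("cty")   // use the `cty` crate for C types
        .use_core()             // keep the output `#![no_std]` compatible
        .generate()
        .expect("unable to generate bindings");

    // Write the generated Rust code into Cargo's OUT_DIR instead of piping it.
    let out_path = PathBuf::from(env::var("OUT_DIR").unwrap());
    bindings
        .write_to_file(out_path.join("bindings.rs"))
        .expect("couldn't write bindings");
}
```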
## Building your C/C++ code
## 构建C/C++代码
As the Rust compiler does not directly know how to compile C or C++ code (or code from any other language, which presents a C interface), it is necessary to compile your non-Rust code ahead of time.
由于Rust编译器并不直接知道如何编译C或C++代码(或任何其他提供C接口的语言的代码),因此有必要提前编译这些非Rust代码。
For embedded projects, this most commonly means compiling the C/C++ code to a static archive (such as `cool-library.a`), which can then be combined with your Rust code at the final linking step.
对于嵌入式项目,这通常意味着把C/C++代码编译为静态归档文件(例如`cool-library.a`),然后在最后的链接步骤将其与Rust代码合并。
If the library you would like to use is already distributed as a static archive, it is not necessary to rebuild your code. Just convert the provided interface header file as described above, and include the static archive at compile/link time.
如果您要使用的库已经以静态库的形式分发,则无需重新构建它的代码。只需按上文所述转换它提供的接口头文件,并在编译/链接时包含该静态库即可。
If your code exists as a source project, it will be necessary to compile your C/C++ code to a static library, either by triggering your existing build system (such as `make`, `CMake`, etc.), or by porting the necessary compilation steps to use a tool called the `cc` crate. For both of these steps, it is necessary to use a `build.rs` script.
如果您依赖的代码以源代码形式提供,则必须先将其C/C++代码编译为静态库:或者调用现有的构建系统(例如`make`、`CMake`等),或者把必要的编译步骤移植到`cc` crate上。这两种情况都需要使用`build.rs`脚本。
### Rust `build.rs` build scripts
### Rust `build.rs`构建脚本
A `build.rs` script is a file written in Rust syntax, that is executed on your compilation machine, AFTER dependencies of your project have been built, but BEFORE your project is built.
`build.rs`脚本是一个用Rust语法编写的文件,它在编译主机上执行,执行时机在项目的依赖项构建完成之后、项目本身构建之前。
The full reference may be found [here](https://doc.rust-lang.org/cargo/reference/build-scripts.html). `build.rs` scripts are useful for generating code (such as via [bindgen]), calling out to external build systems such as `Make`, or directly compiling C/C++ through use of the `cc` crate
完整的参考资料可以在[这里](https://doc.rust-lang.org/cargo/reference/build-scripts.html)找到。`build.rs`脚本适合用来生成代码(例如通过[bindgen])、调用`Make`等外部构建系统,或直接使用`cc` crate编译C/C++。
### Triggering external build systems
### 调用外部构建系统
For projects with complex external projects or build systems, it may be easiest to use [`std::process::Command`] to "shell out" to your other build systems by traversing relative paths, calling a fixed command (such as `make library`), and then copying the resulting static library to the proper location in the `target` build directory.
对于依赖复杂外部项目或构建系统的项目,最简单的方法是使用[`std::process::Command`]“调用外部命令”:切换到相应的相对路径、执行固定命令(例如`make library`),然后把生成的静态库复制到`target`构建目录中的正确位置。下面给出一个示意性的`build.rs`草图。
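A minimal sketch of such a `build.rs` follows; the directory `vendor/`, the Makefile target `library`, and the archive name `libcool.a` are all assumptions made for the example:

```rust,ignore
// build.rs — hypothetical sketch for shelling out to an external Makefile.
use std::env;
use std::fs;
use std::process::Command;

fn main() {
    let out_dir = env::var("OUT_DIR").unwrap();

    // Run `make library` inside the vendored C project.
    let status = Command::new("make")
        .arg("library")
        .current_dir("vendor")
        .status()
        .expect("failed to run make");
    assert!(status.success(), "`make library` failed");

    // Copy the resulting static archive next to Cargo's build artifacts.
    fs::copy("vendor/libcool.a", format!("{}/libcool.a", out_dir))
        .expect("failed to copy static library");

    // Tell rustc where to find the archive and to link against it.
    println!("cargo:rustc-link-search=native={}", out_dir);
    println!("cargo:rustc-link-lib=static=cool");
}
```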
While your crate may be targeting a `no_std` embedded platform, your `build.rs` executes only on machines compiling your crate. This means you may use any Rust crates which will run on your compilation host.
虽然您的crate以`no_std`嵌入式平台为目标,但`build.rs`只在执行编译的计算机上运行。这意味着您可以在其中使用任何能在编译主机上运行的Rust crate。
### Building C/C++ code with the `cc` crate
### 使用`cc` crate构建C/C++代码
For projects with limited dependencies or complexity, or for projects where it is difficult to modify the build system to produce a static library (rather than a final binary or executable), it may be easier to instead utilize the [`cc` crate], which provides an idiomatic Rust interface to the compiler provided by the host.
对于不太复杂或者依赖较少的项目,或者难以修改构建系统以生成静态库(而不是最终的二进制文件或可执行文件)的项目,使用[`cc` crate]可能会更容易它为主机提供的编译器封装了惯用的Rust接口。
[`cc` crate]: https://github.com/alexcrichton/cc-rs
[`cc` crate]:https://github.com/alexcrichton/cc-rs
In the simplest case of compiling a single C file as a dependency to a static library, an example `build.rs` script using the [`cc` crate] would look like this:
```rust,ignore
对于把单个C文件编译为静态库这种最简单的情况,下面给出一个使用[`cc` crate]的`build.rs`示例:
```rust,ignore
extern crate cc;
fn main() {
@ -128,4 +131,4 @@ fn main() {
.file("foo.c")
.compile("libfoo.a");
}
```
```


@ -0,0 +1,131 @@
# A little C with your Rust
Using C or C++ inside of a Rust project consists of two major parts:
- Wrapping the exposed C API for use with Rust
- Building your C or C++ code to be integrated with the Rust code
As C++ does not have a stable ABI for the Rust compiler to target, it is recommended to use the `C` ABI when combining Rust with C or C++.
## Defining the interface
Before consuming C or C++ code from Rust, it is necessary to define (in Rust) what data types and function signatures exist in the linked code. In C or C++, you would include a header (`.h` or `.hpp`) file which defines this data. In Rust, it is necessary to either manually translate these definitions to Rust, or use a tool to generate these definitions.
First, we will cover manually translating these definitions from C/C++ to Rust.
### Wrapping C functions and Datatypes
Typically, libraries written in C or C++ will provide a header file defining all types and functions used in public interfaces. An example file may look like this:
```C
/* File: cool.h */
typedef struct CoolStruct {
int x;
int y;
} CoolStruct;
void cool_function(int i, char c, CoolStruct* cs);
```
When translated to Rust, this interface would look as such:
```rust,ignore
/* File: cool_bindings.rs */
#[repr(C)]
pub struct CoolStruct {
pub x: cty::c_int,
pub y: cty::c_int,
}
pub extern "C" fn cool_function(
i: cty::c_int,
c: cty::c_char,
cs: *mut CoolStruct
);
```
Let's take a look at this definition one piece at a time, to explain each of the parts.
```rust,ignore
#[repr(C)]
pub struct CoolStruct { ... }
```
By default, Rust does not guarantee order, padding, or the size of data included in a `struct`. In order to guarantee compatibility with C code, we include the `#[repr(C)]` attribute, which instructs the Rust compiler to always use the same rules C does for organizing data within a struct.
```rust,ignore
pub x: cty::c_int,
pub y: cty::c_int,
```
Due to the flexibility of how C or C++ defines an `int` or `char`, it is recommended to use primitive data types defined in `cty`, which will map types from C to types in Rust
```rust,ignore
pub extern "C" fn cool_function( ... );
```
This statement defines the signature of a function that uses the C ABI, called `cool_function`. By defining the signature without defining the body of the function, the definition of this function will need to be provided elsewhere, or linked into the final library or binary from a static library.
```rust,ignore
i: cty::c_int,
c: cty::c_char,
cs: *mut CoolStruct
```
Similar to our datatype above, we define the datatypes of the function arguments using C-compatible definitions. We also retain the same argument names, for clarity.
We have one new type here, `*mut CoolStruct`. As C does not have a concept of Rust's references, which would look like this: `&mut CoolStruct`, we instead have a raw pointer. As dereferencing this pointer is `unsafe`, and the pointer may in fact be a `null` pointer, care must be taken to ensure the guarantees typical of Rust when interacting with C or C++ code.
### Automatically generating the interface
Rather than manually generating these interfaces, which may be tedious and error prone, there is a tool called [bindgen] which will perform these conversions automatically. For instructions of the usage of [bindgen], please refer to the [bindgen user's manual], however the typical process consists of the following:
1. Gather all C or C++ headers defining interfaces or datatypes you would like to use with Rust
2. Write a `bindings.h` file, which `#include "..."`'s each of the files you gathered in step one
3. Feed this `bindings.h` file, along with any compilation flags used to compile
your code into `bindgen`. Tip: use `Builder.ctypes_prefix("cty")` /
`--ctypes-prefix=cty` and `Builder.use_core()` to make the generated code `#![no_std]` compatible.
4. `bindgen` will produce the generated Rust code to the output of the terminal window. This file may be piped to a file in your project, such as `bindings.rs`. You may use this file in your Rust project to interact with C/C++ code compiled and linked as an external library. Tip: don't forget to use the [`cty`](https://crates.io/crates/cty) crate if your types in the generated bindings are prefixed with `cty`.
[bindgen]: https://github.com/rust-lang/rust-bindgen
[bindgen user's manual]: https://rust-lang.github.io/rust-bindgen/
## Building your C/C++ code
As the Rust compiler does not directly know how to compile C or C++ code (or code from any other language, which presents a C interface), it is necessary to compile your non-Rust code ahead of time.
For embedded projects, this most commonly means compiling the C/C++ code to a static archive (such as `cool-library.a`), which can then be combined with your Rust code at the final linking step.
If the library you would like to use is already distributed as a static archive, it is not necessary to rebuild your code. Just convert the provided interface header file as described above, and include the static archive at compile/link time.
If your code exists as a source project, it will be necessary to compile your C/C++ code to a static library, either by triggering your existing build system (such as `make`, `CMake`, etc.), or by porting the necessary compilation steps to use a tool called the `cc` crate. For both of these steps, it is necessary to use a `build.rs` script.
### Rust `build.rs` build scripts
A `build.rs` script is a file written in Rust syntax, that is executed on your compilation machine, AFTER dependencies of your project have been built, but BEFORE your project is built.
The full reference may be found [here](https://doc.rust-lang.org/cargo/reference/build-scripts.html). `build.rs` scripts are useful for generating code (such as via [bindgen]), calling out to external build systems such as `Make`, or directly compiling C/C++ through use of the `cc` crate
### Triggering external build systems
For projects with complex external projects or build systems, it may be easiest to use [`std::process::Command`] to "shell out" to your other build systems by traversing relative paths, calling a fixed command (such as `make library`), and then copying the resulting static library to the proper location in the `target` build directory.
While your crate may be targeting a `no_std` embedded platform, your `build.rs` executes only on machines compiling your crate. This means you may use any Rust crates which will run on your compilation host.
### Building C/C++ code with the `cc` crate
For projects with limited dependencies or complexity, or for projects where it is difficult to modify the build system to produce a static library (rather than a final binary or executable), it may be easier to instead utilize the [`cc` crate], which provides an idiomatic Rust interface to the compiler provided by the host.
[`cc` crate]: https://github.com/alexcrichton/cc-rs
In the simplest case of compiling a single C file as a dependency to a static library, an example `build.rs` script using the [`cc` crate] would look like this:
```rust,ignore
extern crate cc;
fn main() {
cc::Build::new()
.file("foo.c")
.compile("libfoo.a");
}
```


@ -1,63 +1,47 @@
# Interoperability
# 互操作性
Interoperability between Rust and C code is always dependent
on transforming data between the two languages.
For this purposes there are two dedicated modules
in the `stdlib` called
[`std::ffi`](https://doc.rust-lang.org/std/ffi/index.html) and
[`std::os::raw`](https://doc.rust-lang.org/std/os/raw/index.html).
Rust与C代码之间的互操作性始终取决于两种语言之间的数据转换。为此在`stdlib`中有两个专用模块称为[`std::ffi`](https://doc.rust-lang.org/std/ffi/index.html)和[`std::os::raw`](https://doc.rust-lang.org/std/os/raw/index.html)。
`std::os::raw` deals with low-level primitive types that can
be converted implicitly by the compiler
because the memory layout between Rust and C
is similar enough or the same.
`std::os::raw`处理可以由编译器隐式转换的低级基本类型因为Rust和C之间的内存布局足够相似或相同。
`std::ffi` provides some utility for converting more complex
types such as Strings, mapping both `&str` and `String`
to C-types that are easier and safer to handle.
`std::ffi` 提供了一些实用程序,用于转换更复杂的类型(例如字符串),将`&str`和`String`都映射到更易于处理和更安全的C类型。
Neither of these modules are available in `core`, but you can find a `#![no_std]`
compatible version of `std::ffi::{CStr,CString}` in the [`cstr_core`] crate, and
most of the `std::os::raw` types in the [`cty`] crate.
这两个模块都不在`core`中,但您可以在[`cstr_core`]crate中找到支持`#![no_std]`的`std::ffi::{CStr,CString}``std::os::raw`中的大多数类型可以在[`cty`] crate中找到。
[`cstr_core`]: https://crates.io/crates/cstr_core
[`cty`]: https://crates.io/crates/cty
[`cstr_core`]:https://crates.io/crates/cstr_core
[`cty`]:https://crates.io/crates/cty
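As a small, hedged sketch (assuming the `cstr_core` crate mirrors the `std::ffi::CStr` API, as its documentation states), a `#![no_std]` crate can still build and hand out C strings:

```rust,ignore
// Hypothetical sketch: a NUL-terminated C string without `std`.
use cstr_core::CStr;

fn greeting() -> &'static CStr {
    // Checked at runtime; real code could instead build a CStr from a raw
    // pointer received over FFI with `CStr::from_ptr` (which is `unsafe`).
    CStr::from_bytes_with_nul(b"hello from no_std\0").unwrap()
}
```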
| Rust type | Intermediate | C type |
|------------|--------------|--------------|
|Rust类型| 中间类型|C类型 |
| ------------ | -------------- | -------------- |
| String | CString | *char |
| &str | CStr | *const char |
| () | c_void | void |
| u32 or u64 | c_uint | unsigned int |
| etc | ... | ... |
As mentioned above, primitive types can be converted
by the compiler implicitly.
```rust,ignore
如上所述,基本类型可以由编译器隐式转换。
```rust,ignore
unsafe fn foo(num: u32) {
let c_num: c_uint = num;
let r_num: u32 = c_num;
}
```
## Interoperability with other build systems
## 与其他构建系统的互操作性
A common requirement for including Rust in your embedded project is combining
Cargo with your existing build system, such as make or cmake.
嵌入式项目中经常会碰到需要Cargo与现有的构建系统(例如make或cmake)结合的情况。
We are collecting examples and use cases for this on our issue tracker in
[issue #61].
我们正在[问题#61]中收集这方面的示例和用例。
[issue #61]: https://github.com/rust-embedded/book/issues/61
[问题#61]: https://github.com/rust-embedded/book/issues/61
## Interoperability with RTOSs
## 与RTOS的互操作性
Integrating Rust with an RTOS such as FreeRTOS or ChibiOS is still a work in
progress; especially calling RTOS functions from Rust can be tricky.
将Rust与FreeRTOS或ChibiOS等RTOS集成仍在进行中。特别是从Rust调用RTOS函数可能很棘手。
We are collecting examples and use cases for this on our issue tracker in
[issue #62].
我们正在[问题#62]中收集这方面的示例和用例。
[issue #62]: https://github.com/rust-embedded/book/issues/62
[问题#62]: https://github.com/rust-embedded/book/issues/62


@ -0,0 +1,48 @@
# Interoperability
Interoperability between Rust and C code is always dependent on transforming data between the two languages. For this purposes there are two dedicated modules in the `stdlib` called
[`std::ffi`](https://doc.rust-lang.org/std/ffi/index.html) and
[`std::os::raw`](https://doc.rust-lang.org/std/os/raw/index.html).
`std::os::raw` deals with low-level primitive types that can be converted implicitly by the compiler because the memory layout between Rust and C is similar enough or the same.
`std::ffi` provides some utility for converting more complex types such as Strings, mapping both `&str` and `String` to C-types that are easier and safer to handle.
Neither of these modules are available in `core`, but you can find a `#![no_std]` compatible version of `std::ffi::{CStr,CString}` in the [`cstr_core`] crate, and most of the `std::os::raw` types in the [`cty`] crate.
[`cstr_core`]: https://crates.io/crates/cstr_core
[`cty`]: https://crates.io/crates/cty
| Rust type | Intermediate | C type |
|------------|--------------|--------------|
| String | CString | *char |
| &str | CStr | *const char |
| () | c_void | void |
| u32 or u64 | c_uint | unsigned int |
| etc | ... | ... |
As mentioned above, primitive types can be converted by the compiler implicitly.
```rust,ignore
unsafe fn foo(num: u32) {
let c_num: c_uint = num;
let r_num: u32 = c_num;
}
```
## Interoperability with other build systems
A common requirement for including Rust in your embedded project is combining Cargo with your existing build system, such as make or cmake.
We are collecting examples and use cases for this on our issue tracker in [issue #61].
[issue #61]: https://github.com/rust-embedded/book/issues/61
## Interoperability with RTOSs
Integrating Rust with an RTOS such as FreeRTOS or ChibiOS is still a work in progress; especially calling RTOS functions from Rust can be tricky.
We are collecting examples and use cases for this on our issue tracker in [issue #62].
[issue #62]: https://github.com/rust-embedded/book/issues/62


@ -1,22 +1,17 @@
# A little Rust with your C
# C中使用Rust代码
Using Rust code inside a C or C++ project mostly consists of two parts.
在C或C++项目中使用Rust代码主要包括两部分。
- Creating a C-friendly API in Rust
- Embedding your Rust project into an external build system
- 在Rust中创建C友好的API
- 将Rust项目嵌入到外部构建系统中
Apart from `cargo` and `meson`, most build systems don't have native Rust support.
So you're most likely best off just using `cargo` for compiling your crate and
any dependencies.
除了`cargo`和`meson`外,大多数构建系统没有原生的Rust支持。因此最好的做法通常是只用`cargo`来编译您的crate及其所有依赖项。
## Setting up a project
## 建立一个项目
Create a new `cargo` project as usual.
照常创建一个新的`cargo`项目。
There are flags to tell `cargo` to emit a systems library, instead of
its regular rust target.
This also allows you to set a different output name for your library,
if you want it to differ from the rest of your crate.
通过一些参数可以告诉`cargo`生成一个系统库,而不是常规的Rust目标文件。如果您希望库的输出名称与crate的其余部分不同,也可以在这里设置。
```toml
[lib]
@ -25,79 +20,61 @@ crate-type = ["cdylib"] # Creates dynamic lib
# crate-type = ["staticlib"] # Creates static lib
```
## Building a `C` API
## 构建`C` API
Because C++ has no stable ABI for the Rust compiler to target, we use `C` for
any interoperability between different languages. This is no exception when using Rust
inside of C and C++ code.
由于C++没有可供Rust编译器作为目标的稳定ABI,因此在不同语言之间进行互操作时我们使用`C`。在C和C++代码中使用Rust也不例外。
### `#[no_mangle]`
### `#[no_mangle]`
The Rust compiler mangles symbol names differently than native code linkers expect.
As such, any function that Rust exports to be used outside of Rust needs to be told
not to be mangled by the compiler.
Rust编译器对符号名称进行修饰(mangle)的方式与原生代码链接器所期望的不同。因此,任何要在Rust之外使用的导出函数,都需要告知Rust编译器不要对其名称进行修饰。
### `extern "C"`
### `extern "C"`
By default, any function you write in Rust will use the
Rust ABI (which is also not stabilized).
Instead, when building outwards facing FFI APIs we need to
tell the compiler to use the system ABI.
默认情况下您在Rust中编写的任何函数都将使用Rust ABI(它也是不稳定的)。而当构建FFI API时我们需要告诉编译器使用系统ABI。
Depending on your platform, you might want to target a specific ABI version, which are
documented [here](https://doc.rust-lang.org/reference/items/external-blocks.html).
根据您的平台您可能要特定的ABI版本这些在[此处](https://doc.rust-lang.org/reference/items/external-blocks.html)中进行了说明。
---
Putting these parts together, you get a function that looks roughly like this.
将刚刚的内容总结在一起,您将获得一个大致如下所示的函数。
```rust,ignore
```rust,ignore
#[no_mangle]
pub extern "C" fn rust_function() {
}
```
Just as when using `C` code in your Rust project you now need to transform data
from and to a form that the rest of the application will understand.
就像在Rust项目中使用`C`代码一样,您现在需要把数据在Rust与应用程序其余部分都能理解的形式之间相互转换。下面给出一个示意性的例子。
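For illustration only (not from the book), an exported function that receives a C string and hands back a plain integer might look like the sketch below; the function name and the `-1` error value are assumptions:

```rust,ignore
use cty::{c_char, c_int};

/// Hypothetical example: return the byte length of a NUL-terminated C string,
/// or -1 if the caller passed a null pointer.
#[no_mangle]
pub unsafe extern "C" fn rust_strlen(s: *const c_char) -> c_int {
    if s.is_null() {
        return -1;
    }
    let mut len: c_int = 0;
    while *s.add(len as usize) != 0 {
        len += 1;
    }
    len
}
```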
## Linking and greater project context.
## 链接和更大的项目上下文。
So then, that's one half of the problem solved.
How do you use this now?
因此,现在只是解决了问题的一半。您现在如何使用它?
**This very much depends on your project and/or build system**
**这在很大程度上取决于您的项目和/或构建系统**
`cargo` will create a `my_lib.so`/`my_lib.dll` or `my_lib.a` file,
depending on your platform and settings. This library can simply be linked
by your build system.
`cargo`将根据您的平台和设置创建一个`my_lib.so`/`my_lib.dll` / `my_lib.a` 文件。该库可以直接由您的构建系统链接。
However, calling a Rust function from C requires a header file to declare
the function signatures.
但是从C调用Rust函数需要一个头文件来声明函数签名。
Every function in your Rust-ffi API needs to have a corresponding header function.
Rust-ffi API中的每个函数都需要具有相应的函数声明。
```rust,ignore
```rust,ignore
#[no_mangle]
pub extern "C" fn rust_function() {}
```
would then become
需要这样一个声明:
```C
void rust_function();
```
有一个工具可以自动执行此过程,称为[cbindgen]它可以分析Rust代码然后从中生成C和C++项目的头文件。
etc.
[cbindgen]:https://github.com/eqrion/cbindgen
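A hedged `build.rs` sketch driving cbindgen through its library API is shown below; the output path `target/include/my_lib.h` is an arbitrary choice for the example, and `cbindgen` would need to be listed under `[build-dependencies]`:

```rust,ignore
// build.rs — hypothetical sketch; generates a C header for this crate.
extern crate cbindgen;

use std::env;

fn main() {
    // CARGO_MANIFEST_DIR points at the crate whose API we want to export.
    let crate_dir = env::var("CARGO_MANIFEST_DIR").unwrap();

    cbindgen::Builder::new()
        .with_crate(crate_dir)
        .with_language(cbindgen::Language::C)
        .generate()
        .expect("unable to generate C bindings")
        .write_to_file("target/include/my_lib.h");
}
```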
There is a tool to automate this process,
called [cbindgen] which analyses your Rust code
and then generates headers for your C and C++ projects from it.
至此在C语言中调用Rust函数只需添加头文件并调用它们
[cbindgen]: https://github.com/eqrion/cbindgen
At this point, using the Rust functions from C
is as simple as including the header and calling them!
```C
#include "my-rust-project.h"


@ -0,0 +1,84 @@
# A little Rust with your C
Using Rust code inside a C or C++ project mostly consists of two parts.
- Creating a C-friendly API in Rust
- Embedding your Rust project into an external build system
Apart from `cargo` and `meson`, most build systems don't have native Rust support. So you're most likely best off just using `cargo` for compiling your crate and any dependencies.
## Setting up a project
Create a new `cargo` project as usual.
There are flags to tell `cargo` to emit a systems library, instead of its regular rust target. This also allows you to set a different output name for your library, if you want it to differ from the rest of your crate.
```toml
[lib]
name = "your_crate"
crate-type = ["cdylib"] # Creates dynamic lib
# crate-type = ["staticlib"] # Creates static lib
```
## Building a `C` API
Because C++ has no stable ABI for the Rust compiler to target, we use `C` for any interoperability between different languages. This is no exception when using Rust inside of C and C++ code.
### `#[no_mangle]`
The Rust compiler mangles symbol names differently than native code linkers expect. As such, any function that Rust exports to be used outside of Rust needs to be told not to be mangled by the compiler.
### `extern "C"`
By default, any function you write in Rust will use the Rust ABI (which is also not stabilized). Instead, when building outwards facing FFI APIs we need to tell the compiler to use the system ABI.
Depending on your platform, you might want to target a specific ABI version, which are documented [here](https://doc.rust-lang.org/reference/items/external-blocks.html).
---
Putting these parts together, you get a function that looks roughly like this.
```rust,ignore
#[no_mangle]
pub extern "C" fn rust_function() {
}
```
Just as when using `C` code in your Rust project you now need to transform data from and to a form that the rest of the application will understand.
## Linking and greater project context.
So then, that's one half of the problem solved. How do you use this now?
**This very much depends on your project and/or build system**
`cargo` will create a `my_lib.so`/`my_lib.dll` or `my_lib.a` file, depending on your platform and settings. This library can simply be linked by your build system.
However, calling a Rust function from C requires a header file to declare the function signatures.
Every function in your Rust-ffi API needs to have a corresponding header function.
```rust,ignore
#[no_mangle]
pub extern "C" fn rust_function() {}
```
would then become
```C
void rust_function();
```
etc.
There is a tool to automate this process, called [cbindgen] which analyses your Rust code and then generates headers for your C and C++ projects from it.
[cbindgen]: https://github.com/eqrion/cbindgen
At this point, using the Rust functions from C is as simple as including the header and calling them!
```C
#include "my-rust-project.h"
rust_function();
```


@ -1,39 +1,32 @@
# Meet Your Hardware
# 认识您的硬件
Let's get familiar with the hardware we'll be working with.
让我们首先熟悉一下我们将要使用的硬件。
## STM32F3DISCOVERY (the "F3")
## STM32F3DISCOVERY(以下简称"F3")
<p align="center">
<img title="F3" src="../assets/f3.jpg">
<p align="center">
<img title="F3" src="../assets/f3.jpg">
</p>
What does this board contain?
该板包含什么?
- A [STM32F303VCT6](https://www.st.com/en/microcontrollers/stm32f303vc.html) microcontroller. This microcontroller has
- A single-core ARM Cortex-M4F processor with hardware support for single-precision floating point
operations and a maximum clock frequency of 72 MHz.
- [STM32F303VCT6](https://www.st.com/en/microcontrollers/stm32f303vc.html)微控制器。该微控制器具有
- 支持单精度浮点数的单核ARM Cortex-M4F处理器最大时钟频率为72 MHz。
- 256KiB的Flash。 (1 KiB = 1024字节)
- 48KiB的RAM。
- 各种集成外设例如定时器I2CSPI和USART。
- 通用输入输出(GPIO)和其他类型的引脚,可通过板侧的两排引脚访问。
- 一个USB接口 标有"USB USER"的USB端口。
- [加速度计](https://en.wikipedia.org/wiki/Accelerometer)(作为[LSM303DLHC](https://www.st.com/en/mems-and-sensors/lsm303dlhc.html)的一部分)。
- 256 KiB of "Flash" memory. (1 KiB = 1024 bytes)
- [磁力仪](https://en.wikipedia.org/wiki/Magnetometer)(作为[LSM303DLHC](https://www.st.com/en/mems-and-sensors/lsm303dlhc.html)的一部分)。
- 48 KiB of RAM.
- [陀螺仪](https://en.wikipedia.org/wiki/Gyroscope) (作为[L3GD20](https://www.pololu.com/file/0J563/L3GD20.pdf)芯片的一部分)。
- A variety of integrated peripherals such as timers, I2C, SPI and USART.
- 8个LED(以指南针样式排列)
- General purpose Input Output (GPIO) and other types of pins accessible through the two rows of headers along side the board.
- A USB interface accessible through the USB port labeled "USB USER".
- 第二个微控制器:[STM32F103](https://www.st.com/en/microcontrollers/stm32f103cb.html)。该微控制器实际上是板载编程器/调试器的一部分,连接到名为"USB ST-LINK"的USB端口。
- An [accelerometer](https://en.wikipedia.org/wiki/Accelerometer) as part of the [LSM303DLHC](https://www.st.com/en/mems-and-sensors/lsm303dlhc.html) chip.
有关该板子的功能和更多规格的详细列表,请访问[STMicroelectronics](https://www.st.com/en/evaluation-tools/stm32f3discovery.html)。
- A [magnetometer](https://en.wikipedia.org/wiki/Magnetometer) as part of the [LSM303DLHC](https://www.st.com/en/mems-and-sensors/lsm303dlhc.html) chip.
- A [gyroscope](https://en.wikipedia.org/wiki/Gyroscope) as part of the [L3GD20](https://www.pololu.com/file/0J563/L3GD20.pdf) chip.
- 8 user LEDs arranged in the shape of a compass.
- A second microcontroller: a [STM32F103](https://www.st.com/en/microcontrollers/stm32f103cb.html). This microcontroller is actually part of an on-board programmer / debugger and is connected to the USB port named "USB ST-LINK".
For a more detailed list of features and further specifications of the board take a look at the [STMicroelectronics](https://www.st.com/en/evaluation-tools/stm32f3discovery.html) website.
A word of caution: be careful if you want to apply external signals to the board. The microcontroller STM32F303VCT6 pins take a nominal voltage of 3.3 volts. For further information consult the [6.2 Absolute maximum ratings section in the manual](https://www.st.com/resource/en/datasheet/stm32f303vc.pdf)
**特别注意**如果要将外部信号施加到板上请小心。微控制器STM32F303VCT6引脚的标称电压为3.3伏。有关更多信息,请参阅[手册中的6.2章节 绝对最大额定值部分](https://www.st.com/resource/zh/datasheet/stm32f303vc.pdf)

src/intro/hardware_en.md

@ -0,0 +1,39 @@
# Meet Your Hardware
Let's get familiar with the hardware we'll be working with.
## STM32F3DISCOVERY (the "F3")
<p align="center">
<img title="F3" src="../assets/f3.jpg">
</p>
What does this board contain?
- A [STM32F303VCT6](https://www.st.com/en/microcontrollers/stm32f303vc.html) microcontroller. This microcontroller has
- A single-core ARM Cortex-M4F processor with hardware support for single-precision floating point
operations and a maximum clock frequency of 72 MHz.
- 256 KiB of "Flash" memory. (1 KiB = 1024 bytes)
- 48 KiB of RAM.
- A variety of integrated peripherals such as timers, I2C, SPI and USART.
- General purpose Input Output (GPIO) and other types of pins accessible through the two rows of headers along side the board.
- A USB interface accessible through the USB port labeled "USB USER".
- An [accelerometer](https://en.wikipedia.org/wiki/Accelerometer) as part of the [LSM303DLHC](https://www.st.com/en/mems-and-sensors/lsm303dlhc.html) chip.
- A [magnetometer](https://en.wikipedia.org/wiki/Magnetometer) as part of the [LSM303DLHC](https://www.st.com/en/mems-and-sensors/lsm303dlhc.html) chip.
- A [gyroscope](https://en.wikipedia.org/wiki/Gyroscope) as part of the [L3GD20](https://www.pololu.com/file/0J563/L3GD20.pdf) chip.
- 8 user LEDs arranged in the shape of a compass.
- A second microcontroller: a [STM32F103](https://www.st.com/en/microcontrollers/stm32f103cb.html). This microcontroller is actually part of an on-board programmer / debugger and is connected to the USB port named "USB ST-LINK".
For a more detailed list of features and further specifications of the board take a look at the [STMicroelectronics](https://www.st.com/en/evaluation-tools/stm32f3discovery.html) website.
A word of caution: be careful if you want to apply external signals to the board. The microcontroller STM32F303VCT6 pins take a nominal voltage of 3.3 volts. For further information consult the [6.2 Absolute maximum ratings section in the manual](https://www.st.com/resource/en/datasheet/stm32f303vc.pdf)


@ -1,119 +1,88 @@
# Introduction
# 介绍
Welcome to The Embedded Rust Book: An introductory book about using the Rust
Programming Language on "Bare Metal" embedded systems, such as Microcontrollers.
欢迎阅读《嵌入式Rust手册》: 一本介绍使用Rust在
“裸机”嵌入式系统(例如微控制器)上编程的入门书籍。
## Who Embedded Rust is For
Embedded Rust is for everyone who wants to do embedded programming while taking advantage of the higher-level concepts and safety guarantees the Rust language provides.
(See also [Who Rust Is For](https://doc.rust-lang.org/book/ch00-00-introduction.html))
## 本书的潜在读者
## Scope
本书适用于希望使用Rust提供的高级概念和安全性的嵌入式开发工程师。(另请参见[Rust的目标对象](https://doc.rust-lang.org/book/ch00-00-introduction.html))
The goals of this book are:
## 范围
* Get developers up to speed with embedded Rust development. i.e. How to set
up a development environment.
本书的目标是:
* Share *current* best practices about using Rust for embedded development. i.e.
How to best use Rust language features to write more correct embedded
software.
* 帮助开发人员快速上手嵌入式Rust开发,例如:如何搭建开发环境。
* Serve as a cookbook in some cases. e.g. How do I do mix C and Rust in a single
project?
* 分享 *当前* 使用Rust进行嵌入式开发的最佳实践,即:如何最好地利用Rust语言特性来编写更正确的嵌入式软件。
This book tries to be as general as possible but to make things easier for both
the readers and the writers it uses the ARM Cortex-M architecture in all its
examples. However, the book doesn't assume that the reader is familiar with this
particular architecture and explains details particular to this architecture
where required.
* 也可以作为手册。例如如何在同一个项目中混合使用C和Rust
## Who This Book is For
This book caters towards people with either some embedded background or some Rust background, however we believe
everybody curious about embedded Rust programming can get something out of this book. For those without any prior knowledge
we suggest you read the "Assumptions and Prerequisites" section and catch up on missing knowledge to get more out of the book
and improve your reading experience. You can check out the "Other Resources" section to find resources on topics
you might want to catch up on.
本书试图尽可能地涵盖更多议题,但是为了既降低对读者也降低对作者的要求,本书所有的例子都针对Cortex-M架构的ARM处理器。 但是,本书并不假定读者对此处理架构非常熟悉,因此会在需要的地方解释该架构的特定细节。
### Assumptions and Prerequisites
## 这本书适合谁
本书面向的是具有嵌入式背景或熟悉Rust语言的人但是我们相信每个对嵌入式Rust编程感兴趣的人都可以从本书中学到一些东西。对于那些没有任何先验知识的人我们建议您阅读[假设和先决条件](#假设和先决条件)部分,并补上缺少的知识。您可以查看[其他资源](#其他资源)部分以找到有关主题的资源。
* You are comfortable using the Rust Programming Language, and have written,
run, and debugged Rust applications on a desktop environment. You should also
be familiar with the idioms of the [2018 edition] as this book targets
Rust 2018.
### 假设和先决条件
[2018 edition]: https://doc.rust-lang.org/edition-guide/
* 您很习惯使用Rust编程语言 在桌面环境上编写,运行和调试过Rust应用程序。你也应该熟悉本书针对的[2018版](https://doc.rust-lang.org/edition-guide/)语法。
* 您能够熟练地使用其他语言(例如C、C++或Ada)开发和调试嵌入式系统,并且熟悉以下概念:
* 交叉编译
* 内存映射外设
* 中断
* 通用接口,例如I2C、SPI、串口等。
* You are comfortable developing and debugging embedded systems in another
language such as C, C++, or Ada, and are familiar with concepts such as:
* Cross Compilation
* Memory Mapped Peripherals
* Interrupts
* Common interfaces such as I2C, SPI, Serial, etc.
### 其他资源
如果您不熟悉上述任何内容,或者想要了解有关本书中提到的特定主题的更多信息,下面的资源可能会有所帮助。
### Other Resources
If you are unfamiliar with anything mentioned above or if you want more information about a specific topic mentioned in this book you might find some of these resources helpful.
|主题|资源|描述
| -------------- | ---------- | ------------- |
|Rust| [Rust Book](https://doc.rust-lang.org/book/)|如果您对Rust尚不熟悉我们强烈建议您阅读本书。 |
|Rust,嵌入式| [Discovery Book](https://docs.rust-embedded.org/discovery/)|如果您从未做过任何嵌入式编程,那么本书可能是一个更好的开始|
|Rust,嵌入式| [嵌入式Rust书架](https://docs.rust-embedded.org)|在这里您可以找到Rust嵌入式工作组提供的其他一些资源。 |
|Rust,嵌入式| [Embedonomicon](https://docs.rust-embedded.org/embedonomicon/)|深入讲解用Rust进行嵌入式编程时的底层细节。 |
|Rust,嵌入式| [嵌入式常见问题解答](https://docs.rust-embedded.org/faq.html)|关于嵌入式Rust的常见问题。 |
|中断| [中断](https://en.wikipedia.org/wiki/Interrupt)| -|
|内存映射的IO外设| [内存映射的I/O](https://en.wikipedia.org/wiki/Memory-mapped_I/O)| -|
| SPI、UART、RS232、USB、I2C、TTL | [关于SPI、UART等接口的Stack Exchange问答](https://electronics.stackexchange.com/questions/37814/usart-uart-rs232-usb-spi-i2c-ttl-etc-what-are-all-of-these-and-how-do-th)| -|
| Topic | Resource | Description |
|--------------|----------|-------------|
| Rust | [Rust Book](https://doc.rust-lang.org/book/) | If you are not yet comfortable with Rust, we highly suggest reading this book. |
| Rust, Embedded | [Discovery Book](https://docs.rust-embedded.org/discovery/) | If you have never done any embedded programming, this book might be a better start |
| Rust, Embedded | [Embedded Rust Bookshelf](https://docs.rust-embedded.org) | Here you can find several other resources provided by Rust's Embedded Working Group. |
| Rust, Embedded | [Embedonomicon](https://docs.rust-embedded.org/embedonomicon/) | The nitty gritty details when doing embedded programming in Rust. |
| Rust, Embedded | [embedded FAQ](https://docs.rust-embedded.org/faq.html) | Frequently asked questions about Rust in an embedded context. |
| Interrupts | [Interrupt](https://en.wikipedia.org/wiki/Interrupt) | - |
| Memory-mapped IO/Peripherals | [Memory-mapped I/O](https://en.wikipedia.org/wiki/Memory-mapped_I/O) | - |
| SPI, UART, RS232, USB, I2C, TTL | [Stack Exchange about SPI, UART, and other interfaces](https://electronics.stackexchange.com/questions/37814/usart-uart-rs232-usb-spi-i2c-ttl-etc-what-are-all-of-these-and-how-do-th) | - |
## 如何使用这本书
## How to Use This Book
本书通常假定您会从头到尾阅读它。后面的章节会构建在前面各章的基础上,前面的章节可能会在一个主题上点到即止,而后面的章节则会重新深入讨论该主题。
This book generally assumes that youre reading it front-to-back. Later
chapters build on concepts in earlier chapters, and earlier chapters may
not dig into details on a topic, revisiting the topic in a later chapter.
本书大多数示例都基于[STM32F3DISCOVERY](http://www.st.com/en/evaluation-tools/stm32f3discovery.html)开发板。这个板子基于ARM Cortex-M架构。虽然基于此架构的大多数CPU的基本功能相同,但外设和其他实现细节随供应商不同而不同,甚至同一供应商的不同微控制器系列之间也不尽相同。
This book will be using the [STM32F3DISCOVERY] development board from
STMicroelectronics for the majority of the examples contained within. This board
is based on the ARM Cortex-M architecture, and while basic functionality is
the same across most CPUs based on this architecture, peripherals and other
implementation details of Microcontrollers are different between different
vendors, and often even different between Microcontroller families from the same
vendor.
因此,为了遵循本书中的示例,我们建议购买[STM32F3DISCOVERY](http://www.st.com/en/evaluation-tools/stm32f3discovery.html)开发板
For this reason, we suggest purchasing the [STM32F3DISCOVERY] development board
for the purpose of following the examples in this book.
[STM32F3DISCOVERY]: http://www.st.com/en/evaluation-tools/stm32f3discovery.html
## 改进本书
## Contributing to This Book
本书的工作在[此存储库](https://github.com/rust-embedded/book)中,主要是由[Rust资源团队](https://github.com/rust-embedded/wg#the-resources-team)开发。
The work on this book is coordinated in [this repository] and is mainly
developed by the [resources team].
如果您在阅读本书时遇到困难,或者发现一些本书的部分内容不够清晰或难以理解,可以在本书的[问题跟踪](https://github.com/rust-embedded/book/issues/)中进行报告。
[this repository]: https://github.com/rust-embedded/book
[resources team]: https://github.com/rust-embedded/wg#the-resources-team
非常欢迎提交修正拼写错误或添加新内容的PR!
If you have trouble following the instructions in this book or find that some
section of the book is not clear enough or hard to follow then that's a bug and
it should be reported in [the issue tracker] of this book.
## 重复使用此材料
[the issue tracker]: https://github.com/rust-embedded/book/issues/
本书遵循以下许可:
Pull requests fixing typos and adding new content are very welcome!
* 本书中包含的示例代码和独立的Cargo项目均遵循[MIT许可证]和[Apache许可证v2.0]的条款。
* 本书中包含的书面散文,图片和图表均遵循[CC-BY-SA v4.0]许可条款。
## Re-using this material
This book is distributed under the following licenses:
* The code samples and free-standing Cargo projects contained within this book are licensed under the terms of both the [MIT License] and the [Apache License v2.0].
* The written prose, pictures and diagrams contained within this book are licensed under the terms of the Creative Commons [CC-BY-SA v4.0] license.
[MIT License]: https://opensource.org/licenses/MIT
[Apache License v2.0]: http://www.apache.org/licenses/LICENSE-2.0
[MIT许可证]: https://opensource.org/licenses/MIT
[Apache许可证v2.0]:http://www.apache.org/licenses/LICENSE-2.0
[CC-BY-SA v4.0]: https://creativecommons.org/licenses/by-sa/4.0/legalcode
TL;DR: If you want to use our text or images in your work, you need to:
TL; DR: 如果您想在工作中使用我们的文字或图片,则需要:
* Give the appropriate credit (i.e. mention this book on your slide, and provide a link to the relevant page)
* Provide a link to the [CC-BY-SA v4.0] licence
* Indicate if you have changed the material in any way, and make any changes to our material available under the same licence
* 注明适当的出处(即在幻灯片中提及本书,并提供指向相关页面的链接)
* 提供[CC-BY-SA v4.0]许可证的链接
* 如果您对材料做了任何修改,请予以说明,并以相同的许可发布基于我们材料的修改版本
Also, please do let us know if you find this book useful!
另外,如果您觉得这本书有用,请告诉我们!

src/intro/index_en.md

@ -0,0 +1,88 @@
# 介绍
欢迎阅读《嵌入式Rust手册》: 一本介绍使用Rust在
“裸机”嵌入式系统(例如微控制器)上编程的入门书籍。
## 本书的潜在读者
本书适用于希望使用Rust提供的高级概念和安全性的嵌入式开发工程师。(另请参见[Rust的目标对象](https://doc.rust-lang.org/book/ch00-00-introduction.html))
## 范围
本书的目标是:
* 帮助开发人员快速上手嵌入式Rust开发,例如:如何搭建开发环境。
* 分享 *当前* 关于使用Rust进行嵌入式开发的最佳实践。即
  如何最好地使用Rust语言特性来编写更正确的嵌入式系统。
* 也可以作为手册。例如如何在同一个项目中混合使用C和Rust
本书试图尽可能地涵盖更多议题,但是为了既降低对读者也降低对作者的要求,本书所有的例子都针对Cortex-M架构的ARM处理器。 但是,本书并不假定读者对此处理架构非常熟悉,因此会在需要的地方解释该架构的特定细节。
## 这本书适合谁
本书面向的是具有某些嵌入式背景或熟悉Rust语言的人但是我们相信每个对嵌入式Rust编程感兴趣的人都可以从本书中学到一些东西。对于那些没有任何先验知识的人我们建议您阅读[假设和先决条件](#假设和先决条件)部分,并补上缺少的知识以从书中获取更多信息并改善您的阅读体验。您可以查看[其他资源](#其他资源)部分以找到有关主题的资源。
### 假设和先决条件
* 您很习惯使用Rust编程语言 在桌面环境上编写,运行和调试过Rust应用程序。你也应该熟悉本书针对的[2018版](https://doc.rust-lang.org/edition-guide/)语法。
* 您能够熟练地使用其他语言(例如C、C++或Ada)开发和调试嵌入式系统,并且熟悉以下概念:
* 交叉编译
* 内存映射外设
* 中断
* 通用接口例如I2CSPI串行等。
### 其他资源
如果您不熟悉上述任何内容,或者想要了解有关本书中提到的特定主题的更多信息,下面的资源可能会有所帮助。
|主题|资源|描述
| -------------- | ---------- | ------------- |
|Rust| [Rust Book](https://doc.rust-lang.org/book/)|如果您对Rust尚不熟悉我们强烈建议您阅读本书。 |
|Rust,嵌入式| [Discovery Book](https://docs.rust-embedded.org/discovery/)|如果您从未做过任何嵌入式编程,那么本书可能是一个更好的开始|
|Rust,嵌入式| [嵌入式Rust书架](https://docs.rust-embedded.org)|在这里您可以找到Rust嵌入式工作组提供的其他一些资源。 |
|Rust,嵌入式| [Embedonomicon](https://docs.rust-embedded.org/embedonomicon/)|用Rust进行嵌入式编程细节非常棒。 |
|Rust,嵌入式| [嵌入式常见问题解答](https://docs.rust-embedded.org/faq.html)|关于嵌入式Rust的常见问题。 |
|中断| [中断](https://en.wikipedia.org/wiki/Interrupt)| -|
|内存映射的IO外设| [内存映射的I/O](https://en.wikipedia.org/wiki/Memory-mapped_I/O)| -|
| SPIUARTRS232USBI2CTTL | [有关SPIUART和其他接口的堆栈交换](https://electronics.stackexchange.com/questions/37814/usart-uart-rs232-usb-spi-i2c-ttl-etc-what-are-all-of-these-and-how-do-th)| -|
## 如何使用这本书
本书通常假定您会从头到尾阅读它。后面的章节会构建在前面各章的基础上,前面的章节可能会在一个主题上点到即止,而后面的章节则会重新深入讨论该主题。
本书大多数示例都基于[STM32F3DISCOVERY](http://www.st.com/en/evaluation-tools/stm32f3discovery.html)开发板。这个板子基于ARM Cortex-M架构。虽然基于此架构的大多数CPU的基本功能相同,但外设和其他实现细节随供应商不同而不同,甚至同一供应商的不同微控制器系列之间也不尽相同。
因此,为了遵循本书中的示例,我们建议购买[STM32F3DISCOVERY](http://www.st.com/en/evaluation-tools/stm32f3discovery.html)开发板
## 改进本书
本书的工作在[此存储库](https://github.com/rust-embedded/book)中,主要是
由[Rust资源团队](https://github.com/rust-embedded/wg#the-resources-team)开发。
如果您在遵循本书中的说明时遇到困难,或者发现一些
本书的部分内容不够清晰或难以理解,可以在本书的[问题跟踪器](https://github.com/rust-embedded/book/issues/)中进行报告。
欢迎针对本书提供任何但不限于有关拼写和新内容的PR.
## 重复使用此材料
本书遵循以下许可:
* 本书中包含的示例代码和独立的Cargo项目均遵循[MIT许可](https://opensource.org/licenses/MIT)和[Apache许可v2.0](http://www.apache.org/licenses/LICENSE-2.0)的条款。
* 本书中包含的书面散文,图片和图表均遵循[CC-BY-SA v4.0](https://creativecommons.org/licenses/by-sa/4.0/legalcode)许可条款。
TL; DR: 如果您想在工作中使用我们的文字或图片,则需要:
* 给予适当的感谢(即在幻灯片上提及此书,并提供指向相关页面的链接)
* 提供[CC-BY-SA v4.0](https://creativecommons.org/licenses/by-sa/4.0/legalcode)许可证的链接
* 指出您是否以任何方式更改了材料,并根据相同的许可对我们的材料进行了任何更改
另外,如果您觉得这本书有用,请告诉我们!


@ -1,64 +1,62 @@
# Installing the tools
# 安装工具
This page contains OS-agnostic installation instructions for a few of the tools:
### Rust Toolchain
### Rust工具链
Install rustup by following the instructions at [https://rustup.rs](https://rustup.rs).
按照[rustup](https://rustup.rs)上的说明安装rustup。
**NOTE** Make sure you have a compiler version equal to or newer than `1.31`. `rustc
-V` should return a date newer than the one shown below.
**注意** 请确保编译器版本等于或高于`1.31`。`rustc -V`返回的日期应该比下面显示的更新。
``` console
```sh
$ rustc -V
rustc 1.31.1 (b6c32da9b 2018-12-18)
rustc 1.31.1 (b6c32da9b 2018-12-18)
```
For bandwidth and disk usage concerns the default installation only supports
native compilation. To add cross compilation support for the ARM Cortex-M
architectures choose one of the following compilation targets. For the STM32F3DISCOVERY
board used for the examples in this book, use the final `thumbv7em-none-eabihf` target.
Cortex-M0, M0+, and M1 (ARMv6-M architecture):
``` console
出于带宽和磁盘占用的考虑,默认安装仅支持本机编译,因此需要从下面的编译目标中选择,为ARM Cortex-M架构添加交叉编译支持。对于本书示例所用的STM32F3DISCOVERY开发板,请使用最后一个`thumbv7em-none-eabihf`目标。
针对Cortex-M0、M0+和M1(ARMv6-M架构):
```sh
$ rustup target add thumbv6m-none-eabi
```
Cortex-M3 (ARMv7-M architecture):
``` console
针对Cortex-M3(ARMv7-M架构):
```sh
$ rustup target add thumbv7m-none-eabi
```
Cortex-M4 and M7 without hardware floating point (ARMv7E-M architecture):
``` console
针对没有硬件浮点的Cortex-M4和M7(ARMv7E-M架构):
```sh
$ rustup target add thumbv7em-none-eabi
```
Cortex-M4F and M7F with hardware floating point (ARMv7E-M architecture):
``` console
$ rustup target add thumbv7em-none-eabihf
针对具有硬件浮点的Cortex-M4F和M7F(ARMv7E-M架构):
```sh
$ rustup target add thumbv7em-none-eabihf
```
### `cargo-binutils`
``` console
```sh
$ cargo install cargo-binutils
$ rustup component add llvm-tools-preview
$ rustup component add llvm-tools-preview
```
### `cargo-generate`
We'll use this later to generate a project from a template.
稍后我们将使用它从模板生成项目。
``` console
```sh
$ cargo install cargo-generate
```
### OS-Specific Instructions
### 操作系统相关的安装说明
Now follow the instructions specific to the OS you are using:
现在,按照您所使用的操作系统的特定说明进行操作:
- [Linux](install/linux.md)
- [Windows](install/windows.md)
- [macOS](install/macos.md)
- [macOS](install/macos.md)


@ -1,15 +1,15 @@
# Linux
Here are the installation commands for a few Linux distributions.
这是一些Linux发行版的安装命令。
## Packages
## 安装包
- Ubuntu 18.04 or newer / Debian stretch or newer
- Ubuntu 18.04或更高版本/Debian Stretch或更高版本
> **NOTE** `gdb-multiarch` is the GDB command you'll use to debug your ARM
> Cortex-M programs
> **注意** `gdb-multiarch`是用于调试ARM Cortex-M程序的GDB命令
<!-- Debian stretch -->
<!-- Debian stretch -->
<!-- GDB 7.12 -->
<!-- OpenOCD 0.9.0 -->
<!-- QEMU 2.8.1 -->
@ -19,52 +19,52 @@ Here are the installation commands for a few Linux distributions.
<!-- OpenOCD 0.10.0 -->
<!-- QEMU 2.11.1 -->
``` console
```sh
sudo apt install gdb-multiarch openocd qemu-system-arm
```
- Ubuntu 14.04 and 16.04
- Ubuntu 14.0416.04
> **NOTE** `arm-none-eabi-gdb` is the GDB command you'll use to debug your ARM
> Cortex-M programs
> **注意** `arm-none-eabi-gdb`是用于调试ARM Cortex-M程序的GDB命令
<!-- Ubuntu 14.04 -->
<!-- GDB 7.6 (!) -->
<!-- OpenOCD 0.7.0 (?) -->
<!-- QEMU 2.0.0 (?) -->
<!-- Ubuntu 14.04 -->
<!-- GDB 7.6 (!) -->
<!-- OpenOCD 0.7.0 (?) -->
<!-- QEMU 2.0.0 (?) -->
``` console
```sh
sudo apt install gdb-arm-none-eabi openocd qemu-system-arm
```
- Fedora 27 or newer
- Fedora 27或更高版本
> **NOTE** `arm-none-eabi-gdb` is the GDB command you'll use to debug your ARM
> Cortex-M programs
> **注意** `arm-none-eabi-gdb`是用于调试ARM Cortex-M程序的GDB命令
<!-- Fedora 27 -->
<!-- GDB 7.6 (!) -->
<!-- OpenOCD 0.10.0 -->
<!-- QEMU 2.10.2 -->
``` console
```sh
sudo dnf install arm-none-eabi-gdb openocd qemu-system-arm
```
- Arch Linux
> **NOTE** `arm-none-eabi-gdb` is the GDB command you'll use to debug ARM
> Cortex-M programs
> **注意** `arm-none-eabi-gdb`是用于调试ARM Cortex-M程序的GDB命令
``` console
sudo pacman -S arm-none-eabi-gdb qemu-arch-extra openocd
```
## udev rules
## udev规则
This rule lets you use OpenOCD with the Discovery board without root privilege.
该规则使您可以在没有root特权的情况下将OpenOCD与Discovery开发板一起使用。
Create the file `/etc/udev/rules.d/70-st-link.rules` with the contents shown below.
创建文件`/etc/udev/rules.d/70-st-link.rules`,内容如下所示。
``` text
# STM32F3DISCOVERY rev A/B - ST-LINK/V2
@ -74,21 +74,22 @@ ATTRS{idVendor}=="0483", ATTRS{idProduct}=="3748", TAG+="uaccess"
ATTRS{idVendor}=="0483", ATTRS{idProduct}=="374b", TAG+="uaccess"
```
Then reload all the udev rules with:
然后使用以下命令重新加载所有udev规则
``` console
sudo udevadm control --reload-rules
```
If you had the board plugged to your laptop, unplug it and then plug it again.
如果开发板已经连接到笔记本电脑,请先拔下再重新插入。
You can check the permissions by running this command:
您可以通过运行以下命令来检查权限:
``` console
```sh
lsusb
```
Which should show something like
应该显示类似结果:
```text
(..)
@ -96,8 +97,7 @@ Bus 001 Device 018: ID 0483:374b STMicroelectronics ST-LINK/V2.1
(..)
```
Take note of the bus and device numbers. Use those numbers to create a path like
`/dev/bus/usb/<bus>/<device>`. Then use this path like so:
记下总线(bus)和设备(device)编号,用它们组成形如`/dev/bus/usb/<bus>/<device>`的路径,然后像下面这样使用该路径:
``` console
ls -l /dev/bus/usb/001/018
@ -116,10 +116,8 @@ user::rw-
user:you:rw-
```
The `+` appended to permissions indicates the existence of an extended
permission. The `getfacl` command tells the user `you` can make use of
this device.
权限后面的`+`表示存在扩展权限。`getfacl`命令的输出表明用户`you`可以使用此设备。
Now, go to the [next section].
现在,转到[下一部分]。
[next section]: verify.md
[下一部分]:verify.md


@ -0,0 +1,123 @@
# Linux
Here are the installation commands for a few Linux distributions.
## Packages
- Ubuntu 18.04 or newer / Debian stretch or newer
> **NOTE** `gdb-multiarch` is the GDB command you'll use to debug your ARM
> Cortex-M programs
<!-- Debian stretch -->
<!-- GDB 7.12 -->
<!-- OpenOCD 0.9.0 -->
<!-- QEMU 2.8.1 -->
<!-- Ubuntu 18.04 -->
<!-- GDB 8.1 -->
<!-- OpenOCD 0.10.0 -->
<!-- QEMU 2.11.1 -->
``` console
sudo apt install gdb-multiarch openocd qemu-system-arm
```
- Ubuntu 14.04 and 16.04
> **NOTE** `arm-none-eabi-gdb` is the GDB command you'll use to debug your ARM
> Cortex-M programs
<!-- Ubuntu 14.04 -->
<!-- GDB 7.6 (!) -->
<!-- OpenOCD 0.7.0 (?) -->
<!-- QEMU 2.0.0 (?) -->
``` console
sudo apt install gdb-arm-none-eabi openocd qemu-system-arm
```
- Fedora 27 or newer
> **NOTE** `arm-none-eabi-gdb` is the GDB command you'll use to debug your ARM
> Cortex-M programs
<!-- Fedora 27 -->
<!-- GDB 7.6 (!) -->
<!-- OpenOCD 0.10.0 -->
<!-- QEMU 2.10.2 -->
``` console
sudo dnf install arm-none-eabi-gdb openocd qemu-system-arm
```
- Arch Linux
> **NOTE** `arm-none-eabi-gdb` is the GDB command you'll use to debug ARM
> Cortex-M programs
``` console
sudo pacman -S arm-none-eabi-gdb qemu-arch-extra openocd
```
## udev rules
This rule lets you use OpenOCD with the Discovery board without root privilege.
Create the file `/etc/udev/rules.d/70-st-link.rules` with the contents shown below.
``` text
# STM32F3DISCOVERY rev A/B - ST-LINK/V2
ATTRS{idVendor}=="0483", ATTRS{idProduct}=="3748", TAG+="uaccess"
# STM32F3DISCOVERY rev C+ - ST-LINK/V2-1
ATTRS{idVendor}=="0483", ATTRS{idProduct}=="374b", TAG+="uaccess"
```
Then reload all the udev rules with:
``` console
sudo udevadm control --reload-rules
```
If you had the board plugged to your laptop, unplug it and then plug it again.
You can check the permissions by running this command:
``` console
lsusb
```
Which should show something like
```text
(..)
Bus 001 Device 018: ID 0483:374b STMicroelectronics ST-LINK/V2.1
(..)
```
Take note of the bus and device numbers. Use those numbers to create a path like
`/dev/bus/usb/<bus>/<device>`. Then use this path like so:
``` console
ls -l /dev/bus/usb/001/018
```
```text
crw-------+ 1 root root 189, 17 Sep 13 12:34 /dev/bus/usb/001/018
```
```console
getfacl /dev/bus/usb/001/018 | grep user
```
```text
user::rw-
user:you:rw-
```
The `+` appended to permissions indicates the existence of an extended permission. The `getfacl` command tells the user `you` can make use of this device.
Now, go to the [next section].
[next section]: verify.md


@ -1,8 +1,8 @@
# macOS
# macOS
All the tools can be install using [Homebrew]:
可以使用[Homebrew]安装所有工具:
[Homebrew]: http://brew.sh/
[Homebrew]: http://brew.sh/
``` console
$ # GDB
@ -15,6 +15,6 @@ $ # QEMU
$ brew install qemu
```
That's all! Go to the [next section].
就这样!转到[下一部分]。
[next section]: verify.md
[下一部分]:verify.md


@ -0,0 +1,20 @@
# macOS
All the tools can be install using [Homebrew]:
[Homebrew]: http://brew.sh/
``` console
$ # GDB
$ brew install armmbed/formulae/arm-none-eabi-gcc
$ # OpenOCD
$ brew install openocd
$ # QEMU
$ brew install qemu
```
That's all! Go to the [next section].
[next section]: verify.md


@ -1,26 +1,25 @@
# Verify Installation
# 验证安装
In this section we check that some of the required tools / drivers have been
correctly installed and configured.
在本节中,我们检查是否已正确安装和配置了一些必需的工具/驱动程序。
Connect your laptop / PC to the discovery board using a micro USB cable. The
discovery board has two USB connectors; use the one labeled "USB ST-LINK" that
sits on the center of the edge of the board.
使用微型USB电缆将开发板连接到笔记本电脑/PC。开发板有两个USB接口。请使用位于板边缘中央的标有“USB ST-LINK”的USB接口。
Also check that the ST-LINK header is populated. See the picture below; the
ST-LINK header is circled in red.
还要检查ST-LINK跳线是否已经插好。见下图,ST-LINK跳线部分用红色圈出。
<p align="center">
<img title="Connected discovery board" src="../../assets/verify.jpeg">
</p>
Now run the following command:
现在运行以下命令:
``` console
$ openocd -f interface/stlink-v2-1.cfg -f target/stm32f3x.cfg
```
You should get the following output and the program should block the console:
您应该获得以下输出,并且该程序应阻塞控制台:
``` text
Open On-Chip Debugger 0.10.0
@ -41,30 +40,22 @@ Info : Target voltage: 2.919881
Info : stm32f3x.cpu: hardware has 6 breakpoints, 4 watchpoints
```
The contents may not match exactly but you should get the last line about
breakpoints and watchpoints. If you got it then terminate the OpenOCD process
and move to the [next section].
内容可能不完全匹配但是您应该看到有关断点和观察点的最后一行。如果看到了则终止OpenOCD进程并移至[下一部分]。
[next section]: ../../start/index.md
[下一部分]: ../../start/index.md
If you didn't get the "breakpoints" line then try the following command.
如果没有得到“断点”行,请尝试以下命令。
``` console
$ openocd -f interface/stlink-v2.cfg -f target/stm32f3x.cfg
```
If that command works that means you got an old hardware revision of the
discovery board. That won't be a problem but commit that fact to memory as
you'll need to configure things a bit differently later on. You can move to the
[next section].
如果该命令有效,则说明您的开发板的硬件版本较旧。这虽然不是一个问题,但是请记住你稍后需要对配置做一些修改。现在您可以转到[下一部分]。
If neither command worked as a normal user then try to run them with root
permission (e.g. `sudo openocd ..`). If the commands do work with root
permission then check that the [udev rules] have been correctly set.
如果以普通用户身份运行这两个命令都不行,请尝试以root权限运行它们(例如`sudo openocd ..`)。如果使用root权限可以正常工作,则请检查[udev规则]是否已正确设置。
[udev rules]: linux.md#udev-rules
[udev规则]: linux.md#udev-rules
If you have reached this point and OpenOCD is not working please open [an issue]
and we'll help you out!
如果您到了这一步OpenOCD无法正常工作请提交一个[问题],我们将为您提供帮助!
[an issue]: https://github.com/rust-embedded/book/issues
[问题]:https://github.com/rust-embedded/book/issues


@ -0,0 +1,62 @@
# Verify Installation
In this section we check that some of the required tools / drivers have been correctly installed and configured.
Connect your laptop / PC to the discovery board using a micro USB cable. The discovery board has two USB connectors; use the one labeled "USB ST-LINK" that sits on the center of the edge of the board.
Also check that the ST-LINK header is populated. See the picture below; the ST-LINK header is circled in red.
<p align="center">
<img title="Connected discovery board" src="../../assets/verify.jpeg">
</p>
Now run the following command:
``` console
$ openocd -f interface/stlink-v2-1.cfg -f target/stm32f3x.cfg
```
You should get the following output and the program should block the console:
``` text
Open On-Chip Debugger 0.10.0
Licensed under GNU GPL v2
For bug reports, read
http://openocd.org/doc/doxygen/bugs.html
Info : auto-selecting first available session transport "hla_swd". To override use 'transport select <transport>'.
adapter speed: 1000 kHz
adapter_nsrst_delay: 100
Info : The selected transport took over low-level target control. The results might differ compared to plain JTAG/SWD
none separate
Info : Unable to match requested speed 1000 kHz, using 950 kHz
Info : Unable to match requested speed 1000 kHz, using 950 kHz
Info : clock speed 950 kHz
Info : STLINK v2 JTAG v27 API v2 SWIM v15 VID 0x0483 PID 0x374B
Info : using stlink api v2
Info : Target voltage: 2.919881
Info : stm32f3x.cpu: hardware has 6 breakpoints, 4 watchpoints
```
The contents may not match exactly but you should get the last line about breakpoints and watchpoints. If you got it then terminate the OpenOCD process and move to the [next section].
[next section]: ../../start/index.md
If you didn't get the "breakpoints" line then try the following command.
``` console
$ openocd -f interface/stlink-v2.cfg -f target/stm32f3x.cfg
```
If that command works that means you got an old hardware revision of the discovery board.
That won't be a problem but commit that fact to memory as you'll need to configure things a bit differently later on. You can move to the [next section].
If neither command worked as a normal user then try to run them with root permission (e.g. `sudo openocd ..`). If the commands do work with root permission then check that the [udev rules] have been correctly set.
[udev rules]: linux.md#udev-rules
If you have reached this point and OpenOCD is not working please open [an issue] and we'll help you out!
[an issue]: https://github.com/rust-embedded/book/issues


@ -2,9 +2,7 @@
## `arm-none-eabi-gdb`
ARM provides `.exe` installers for Windows. Grab one from [here][gcc], and follow the instructions.
Just before the installation process finishes tick/select the "Add path to environment variable"
option. Then verify that the tools are in your `%PATH%`:
ARM为Windows提供了`.exe`安装程序。从[这里][gcc]下载一个,然后按照说明进行操作。在安装过程即将结束时,勾选“Add path to environment variable”选项。然后验证这些工具是否已经在您的`%PATH%`中:
``` console
$ arm-none-eabi-gdb -v
@ -12,19 +10,15 @@ GNU gdb (GNU Tools for Arm Embedded Processors 7-2018-q2-update) 8.1.0.20180315-
(..)
```
[gcc]: https://developer.arm.com/open-source/gnu-toolchain/gnu-rm/downloads
[gcc]:https://developer.arm.com/open-source/gnu-toolchain/gnu-rm/downloads
## OpenOCD
There's no official binary release of OpenOCD for Windows but there are unofficial releases
available [here][openocd]. Grab the 0.10.x zipfile and extract it somewhere on your drive (I
recommend `C:\OpenOCD` but with the drive letter that makes sense to you) then update your `%PATH%`
environment variable to include the following path: `C:\OpenOCD\bin` (or the path that you used
before).
OpenOCD没有适用于Windows的官方二进制发行版,但[这里][openocd]有非官方发行版。下载0.10.x的zip文件并将其解压到驱动器上的某个位置(建议使用`C:\OpenOCD`,盘符可自行选择),然后更新`%PATH%`环境变量,使其包含`C:\OpenOCD\bin`(或您实际使用的路径)。
[openocd]: https://github.com/gnu-mcu-eclipse/openocd/releases
使用以下命令验证OpenOCD是否在您的`%PATH%`中:
Verify that OpenOCD is in your `%PATH%` with:
``` console
$ openocd -v
@ -32,19 +26,17 @@ Open On-Chip Debugger 0.10.0
(..)
```
## QEMU
Grab QEMU from [the official website][qemu].
从[官方网站](https://www.qemu.org/download/#windows)下载QEMU。
## ST-LINK USB驱动程序
[qemu]: https://www.qemu.org/download/#windows
您还需要安装[此USB驱动程序],否则OpenOCD无法正常工作。按照安装程序的说明进行操作并确保您安装了正确版本的驱动程序(32位或64位)。
## ST-LINK USB driver
[此USB驱动程序]:http://www.st.com/en/embedded-software/stsw-link009.html
You'll also need to install [this USB driver] or OpenOCD won't work. Follow the installer
instructions and make sure you install the right version (32-bit or 64-bit) of the driver.
就这样!转到[下一部分]。
[this USB driver]: http://www.st.com/en/embedded-software/stsw-link009.html
That's all! Go to the [next section].
[next section]: verify.md
[下一部分]:verify.md


@ -0,0 +1,43 @@
# Windows
## `arm-none-eabi-gdb`
ARM provides `.exe` installers for Windows. Grab one from [here][gcc], and follow the instructions. Just before the installation process finishes tick/select the "Add path to environment variable" option. Then verify that the tools are in your `%PATH%`:
``` console
$ arm-none-eabi-gdb -v
GNU gdb (GNU Tools for Arm Embedded Processors 7-2018-q2-update) 8.1.0.20180315-git
(..)
```
[gcc]: https://developer.arm.com/open-source/gnu-toolchain/gnu-rm/downloads
## OpenOCD
There's no official binary release of OpenOCD for Windows but there are unofficial releases available [here][openocd]. Grab the 0.10.x zipfile and extract it somewhere on your drive (I recommend `C:\OpenOCD` but with the drive letter that makes sense to you) then update your `%PATH%` environment variable to include the following path: `C:\OpenOCD\bin` (or the path that you used before).
[openocd]: https://github.com/gnu-mcu-eclipse/openocd/releases
Verify that OpenOCD is in your `%PATH%` with:
``` console
$ openocd -v
Open On-Chip Debugger 0.10.0
(..)
```
## QEMU
Grab QEMU from [the official website][qemu].
[qemu]: https://www.qemu.org/download/#windows
## ST-LINK USB driver
You'll also need to install [this USB driver] or OpenOCD won't work. Follow the installer instructions and make sure you install the right version (32-bit or 64-bit) of the driver.
[this USB driver]: http://www.st.com/en/embedded-software/stsw-link009.html
That's all! Go to the [next section].
[next section]: verify.md

64
src/intro/install_en.md Normal file
View File

@ -0,0 +1,64 @@
# Installing the tools
This page contains OS-agnostic installation instructions for a few of the tools:
### Rust Toolchain
Install rustup by following the instructions at [https://rustup.rs](https://rustup.rs).
**NOTE** Make sure you have a compiler version equal to or newer than `1.31`. `rustc
-V` should return a date newer than the one shown below.
``` console
$ rustc -V
rustc 1.31.1 (b6c32da9b 2018-12-18)
```
For bandwidth and disk usage concerns the default installation only supports
native compilation. To add cross compilation support for the ARM Cortex-M
architectures choose one of the following compilation targets. For the STM32F3DISCOVERY
board used for the examples in this book, use the final `thumbv7em-none-eabihf` target.
Cortex-M0, M0+, and M1 (ARMv6-M architecture):
``` console
$ rustup target add thumbv6m-none-eabi
```
Cortex-M3 (ARMv7-M architecture):
``` console
$ rustup target add thumbv7m-none-eabi
```
Cortex-M4 and M7 without hardware floating point (ARMv7E-M architecture):
``` console
$ rustup target add thumbv7em-none-eabi
```
Cortex-M4F and M7F with hardware floating point (ARMv7E-M architecture):
``` console
$ rustup target add thumbv7em-none-eabihf
```
### `cargo-binutils`
``` console
$ cargo install cargo-binutils
$ rustup component add llvm-tools-preview
```
### `cargo-generate`
We'll use this later to generate a project from a template.
``` console
$ cargo install cargo-generate
```
### OS-Specific Instructions
Now follow the instructions specific to the OS you are using:
- [Linux](install/linux.md)
- [Windows](install/windows.md)
- [macOS](install/macos.md)

View File

@ -1,64 +1,45 @@
# A `no_std` Rust Environment
# `no_std` 环境
The term Embedded Programming is used for a wide range of different classes of programming.
Ranging from programming 8-Bit MCUs (like the [ST72325xx](https://www.st.com/resource/en/datasheet/st72325j6.pdf))
with just a few KB of RAM and ROM, up to systems like the Raspberry Pi
([Model B 3+](https://en.wikipedia.org/wiki/Raspberry_Pi#Specifications)) which has a 32/64-bit
4-core Cortex-A53 @ 1.4 GHz and 1GB of RAM. Different restrictions/limitations will apply when writing code
depending on what kind of target and use case you have.
术语“嵌入式编程”涵盖了范围很广的多种编程类型:从仅有几KB RAM和ROM的8位MCU(例如[ST72325xx](https://www.st.com/resource/zh/datasheet/st72325j6.pdf)),到拥有32/64位四核Cortex-A53(主频1.4 GHz)和1GB RAM的树莓派([Model B 3+](https://en.wikipedia.org/wiki/Raspberry_Pi#Specifications))这样的系统。编写代码时会受到哪些限制,完全取决于您的目标平台和使用场景。
There are two general Embedded Programming classifications:
有两种常规的嵌入式编程分类:
## Hosted Environments
These kinds of environments are close to a normal PC environment.
What this means is that you are provided with a System Interface [E.G. POSIX](https://en.wikipedia.org/wiki/POSIX)
that provides you with primitives to interact with various systems, such as file systems, networking, memory management, threads, etc.
Standard libraries in turn usually depend on these primitives to implement their functionality.
You may also have some sort of sysroot and restrictions on RAM/ROM-usage, and perhaps some
special HW or I/Os. Overall it feels like coding on a special-purpose PC environment.
## 托管环境
这些环境接近普通的PC环境。这意味着有操作系统支持[比如 POSIX](https://en.wikipedia.org/wiki/POSIX), 包括与各种系统资源进行交互的原语例如文件系统网络内存管理线程等。反过来标准库通常依靠这些原语来实现其功能。您可能还具有某种sysroot和对RAM/ROM使用的限制也许还有一些特殊的硬件或I/O外设。总体而言感觉就像在专用PC环境中进行编码。
## Bare Metal Environments
In a bare metal environment no code has been loaded before your program.
Without the software provided by an OS we can not load the standard library.
Instead the program, along with the crates it uses, can only use the hardware (bare metal) to run.
To prevent rust from loading the standard library use `no_std`.
The platform-agnostic parts of the standard library are available through [libcore](https://doc.rust-lang.org/core/).
libcore also excludes things which are not always desirable in an embedded environment.
One of these things is a memory allocator for dynamic memory allocation.
If you require this or any other functionalities there are often crates which provide these.
## 裸机环境
### The libstd Runtime
As mentioned before using [libstd](https://doc.rust-lang.org/std/) requires some sort of system integration, but this is not only because
[libstd](https://doc.rust-lang.org/std/) is just providing a common way of accessing OS abstractions, it also provides a runtime.
This runtime, among other things, takes care of setting up stack overflow protection, processing command line arguments,
and spawning the main thread before a program's main function is invoked. This runtime also won't be available in a `no_std` environment.
在裸机环境中,你的程序运行之前系统不会加载任何代码。没有操作系统提供的支持,我们就无法使用标准库。
相反程序及其使用的crate只能直接使用硬件(裸机)来运行。为了防止Rust加载标准库必须使用`no_std`。可通过[核心库](https://doc.rust-lang.org/core/)获得标准库中与平台无关的部分。核心库还排除了嵌入式环境中并不总是需要的东西。其中之一是用于动态内存分配的内存分配器。如果您需要此功能或任何其他功能通常会有第三方crate实现。
## Summary
`#![no_std]` is a crate-level attribute that indicates that the crate will link to the core-crate instead of the std-crate.
The [libcore](https://doc.rust-lang.org/core/) crate in turn is a platform-agnostic subset of the std crate
which makes no assumptions about the system the program will run on.
As such, it provides APIs for language primitives like floats, strings and slices, as well as APIs that expose processor features
like atomic operations and SIMD instructions. However it lacks APIs for anything that involves platform integration.
Because of these properties no\_std and [libcore](https://doc.rust-lang.org/core/) code can be used for any kind of
bootstrapping (stage 0) code like bootloaders, firmware or kernels.
### Overview
### 标准库运行时
如前所述,使用[标准库]需要某种类型的系统集成,但这不仅是因为[标准库] 提供了一种访问操作系统抽象的通用方法它还提供了一个运行时。该运行时还负责设置堆栈溢出保护处理命令行参数并在调用程序的main函数之前生成主线程。这些功能在`no_std`环境中都无法提供。
| feature | no\_std | std |
|-----------------------------------------------------------|--------|-----|
| heap (dynamic memory) | * | ✓ |
| collections (Vec, HashMap, etc) | ** | ✓ |
| stack overflow protection | ✘ | ✓ |
| runs init code before main | ✘ | ✓ |
| libstd available | ✘ | ✓ |
| libcore available | ✓ | ✓ |
| writing firmware, kernel, or bootloader code | ✓ | ✘ |
[标准库]: https://doc.rust-lang.org/std/
\* Only if you use the `alloc` crate and use a suitable allocator like [alloc-cortex-m].
## 总结
`#![no_std]` 是一个crate级属性指示该crate将链接到核心库而不是标准库。[核心库]是标准库的与平台无关的子集,它不对程序运行的系统做任何假设。它只提供了语言相关(例如浮点数,字符串和切片)的API以及处理器功能(例如原子操作和SIMD指令)的API。但是它缺少涉及平台集成的任何东西的API。 由于这些属性,`no_std`和[核心库]代码可用于任何类型的引导程序(阶段0)代码例如bootloader固件或内核。
\** Only if you use the `collections` crate and configure a global default allocator.
[核心库]: https://doc.rust-lang.org/core/
### 概述
|功能 | no\_std |标准|
| ----------------------------------------------------------- | -------- | ----- |
|堆(动态内存)| * | ✓|
|集合(VecHashMap等)| ** | ✓|
|堆栈溢出保护| ✘| ✓|
|在main之前运行初始化代码| ✘| ✓|
| libstd可用| ✘| ✓|
| libcore可用| ✓| ✓|
|编写固件,内核或引导程序代码| ✓| ✘|
\* 仅当您使用`alloc` crate并使用合适的分配器(如[alloc-cortex-m])时。
\** 仅当您使用`collections`crate并配置全局默认分配器时。
[alloc-cortex-m]: https://github.com/rust-embedded/alloc-cortex-m
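As an illustration of footnote \*, the sketch below registers [alloc-cortex-m] as the global allocator so that `alloc` collections become usable in `no_std`; the heap start address and size are made-up placeholders that depend on your memory layout.

```rust,ignore
#![no_std]

extern crate alloc;

use alloc::vec::Vec;
use alloc_cortex_m::CortexMHeap;

// `alloc` types need a registered global allocator in `no_std`.
#[global_allocator]
static ALLOCATOR: CortexMHeap = CortexMHeap::empty();

fn heap_demo() -> Vec<u32> {
    // Initialise the heap once at start-up; the start address and size here
    // are placeholders and must match your real memory layout.
    unsafe { ALLOCATOR.init(0x2000_0100, 1024) }
    let mut samples = Vec::new();
    samples.push(42);
    samples
}
```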
## See Also
* [RFC-1184](https://github.com/rust-lang/rfcs/blob/master/text/1184-stabilize-no_std.md)
## 其他资料
* [RFC-1184](https://github.com/rust-lang/rfcs/blob/master/text/1184-stabilize-no_std.md)

64
src/intro/no-std_en.md Normal file
View File

@ -0,0 +1,64 @@
# A `no_std` Rust Environment
The term Embedded Programming is used for a wide range of different classes of programming.
Ranging from programming 8-Bit MCUs (like the [ST72325xx](https://www.st.com/resource/en/datasheet/st72325j6.pdf))
with just a few KB of RAM and ROM, up to systems like the Raspberry Pi
([Model B 3+](https://en.wikipedia.org/wiki/Raspberry_Pi#Specifications)) which has a 32/64-bit
4-core Cortex-A53 @ 1.4 GHz and 1GB of RAM. Different restrictions/limitations will apply when writing code
depending on what kind of target and use case you have.
There are two general Embedded Programming classifications:
## Hosted Environments
These kinds of environments are close to a normal PC environment.
What this means is that you are provided with a System Interface [E.G. POSIX](https://en.wikipedia.org/wiki/POSIX)
that provides you with primitives to interact with various systems, such as file systems, networking, memory management, threads, etc.
Standard libraries in turn usually depend on these primitives to implement their functionality.
You may also have some sort of sysroot and restrictions on RAM/ROM-usage, and perhaps some
special HW or I/Os. Overall it feels like coding on a special-purpose PC environment.
## Bare Metal Environments
In a bare metal environment no code has been loaded before your program.
Without the software provided by an OS we can not load the standard library.
Instead the program, along with the crates it uses, can only use the hardware (bare metal) to run.
To prevent rust from loading the standard library use `no_std`.
The platform-agnostic parts of the standard library are available through [libcore](https://doc.rust-lang.org/core/).
libcore also excludes things which are not always desirable in an embedded environment.
One of these things is a memory allocator for dynamic memory allocation.
If you require this or any other functionalities there are often crates which provide these.
### The libstd Runtime
As mentioned before using [libstd](https://doc.rust-lang.org/std/) requires some sort of system integration, but this is not only because
[libstd](https://doc.rust-lang.org/std/) is just providing a common way of accessing OS abstractions, it also provides a runtime.
This runtime, among other things, takes care of setting up stack overflow protection, processing command line arguments,
and spawning the main thread before a program's main function is invoked. This runtime also won't be available in a `no_std` environment.
## Summary
`#![no_std]` is a crate-level attribute that indicates that the crate will link to the core-crate instead of the std-crate.
The [libcore](https://doc.rust-lang.org/core/) crate in turn is a platform-agnostic subset of the std crate
which makes no assumptions about the system the program will run on.
As such, it provides APIs for language primitives like floats, strings and slices, as well as APIs that expose processor features
like atomic operations and SIMD instructions. However it lacks APIs for anything that involves platform integration.
Because of these properties no\_std and [libcore](https://doc.rust-lang.org/core/) code can be used for any kind of
bootstrapping (stage 0) code like bootloaders, firmware or kernels.
### Overview
| feature | no\_std | std |
|-----------------------------------------------------------|--------|-----|
| heap (dynamic memory) | * | ✓ |
| collections (Vec, HashMap, etc) | ** | ✓ |
| stack overflow protection | ✘ | ✓ |
| runs init code before main | ✘ | ✓ |
| libstd available | ✘ | ✓ |
| libcore available | ✓ | ✓ |
| writing firmware, kernel, or bootloader code | ✓ | ✘ |
\* Only if you use the `alloc` crate and use a suitable allocator like [alloc-cortex-m].
\** Only if you use the `collections` crate and configure a global default allocator.
[alloc-cortex-m]: https://github.com/rust-embedded/alloc-cortex-m
## See Also
* [RFC-1184](https://github.com/rust-lang/rfcs/blob/master/text/1184-stabilize-no_std.md)

View File

@ -1,84 +1,53 @@
# Tooling
# 其他工具
Dealing with microcontrollers involves using several different tools as we'll be
dealing with an architecture different than your laptop's and we'll have to run
and debug programs on a *remote* device.
嵌入式开发和在 PC 上开发不太一样:目标架构与你的电脑不同,而且必须在*远程*设备上运行和调试程序,所以需要一些专门的工具支持。
We'll use all the tools listed below. Any recent version should work when a
minimum version is not specified, but we have listed the versions we have
tested.
我们将使用下面列出的所有工具。未注明最低版本时,任何较新的版本都应该可以使用;这里列出的是我们实际测试过的版本。
- Rust 1.31, 1.31-beta, or a newer toolchain PLUS ARM Cortex-M compilation
support.
- [`cargo-binutils`](https://github.com/rust-embedded/cargo-binutils) ~0.1.4
- [`qemu-system-arm`](https://www.qemu.org/). Tested versions: 3.0.0
- OpenOCD >=0.8. Tested versions: v0.9.0 and v0.10.0
- GDB with ARM support. Version 7.12 or newer highly recommended. Tested
versions: 7.10, 7.11, 7.12 and 8.1
- [`cargo-generate`](https://github.com/ashleygwilliams/cargo-generate) or `git`.
These tools are optional but will make it easier to follow along with the book.
- Rust 1.31、1.31-beta或更新的工具链,以及ARM Cortex-M编译支持。
- [`cargo-binutils`](https://github.com/rust-embedded/cargo-binutils) ~0.1.4
- [`qemu-system-arm`](https://www.qemu.org/)。经过测试的版本:3.0.0
- OpenOCD >= 0.8。经过测试的版本:v0.9.0和v0.10.0
- 具有ARM支持的GDB。强烈建议使用7.12或更高版本。经过测试的版本:7.10、7.11、7.12和8.1
- [`cargo-generate`](https://github.com/ashleygwilliams/cargo-generate)或`git`。这些工具是可选的,但能让你更容易跟随书中的示例。
The text below explains why we are using these tools. Installation instructions
can be found on the next page.
## `cargo-generate` OR `git`
## `cargo-generate`或`git`
Bare metal programs are non-standard (`no_std`) Rust programs that require some
adjustments to the linking process in order to get the memory layout of the program
right. This requires some additional files (like linker scripts) and
settings (like linker flags). We have packaged those for you in a template
such that you only need to fill in the missing information (such as the project name and the
characteristics of your target hardware).
裸机程序是非标准(`no_std`)Rust程序一般需要介入链接过程以修正程序的内存布局。这需要一些其他文件(例如链接器脚本)和设置(链接参数)。我们已经为您打包了这些模板,这样您只需要填写缺少的信息(例如项目名称和目标硬件的特性)。
Our template is compatible with `cargo-generate`: a Cargo subcommand for
creating new Cargo projects from templates. You can also download the
template using `git`, `curl`, `wget`, or your web browser.
我们的模板兼容`cargo-generate`(这是一个cargo的子命令)。您也可以使用`git``curl``wget`或浏览器来下载模板。
## `cargo-binutils`
`cargo-binutils` is a collection of Cargo subcommands that make it easy to use
the LLVM tools that are shipped with the Rust toolchain. These tools include the
LLVM versions of `objdump`, `nm` and `size` and are used for inspecting
binaries.
`cargo-binutils`是一系列Cargo子命令的集合,让我们可以方便地使用Rust工具链附带的LLVM工具,其中包括LLVM版本的`objdump`、`nm`和`size`等用于检查二进制文件的工具。
The advantage of using these tools over GNU binutils is that (a) installing the
LLVM tools is the same one-command installation (`rustup component add
llvm-tools-preview`) regardless of your OS and (b) tools like `objdump` support
all the architectures that `rustc` supports -- from ARM to x86_64 -- because
they both share the same LLVM backend.
与GNU binutils相比使用这些工具的优势在于:
- 安装简单,无论什么系统,一条命令(`rustup component add llvm-tools-preview`)与LLVM工具一同安装
- 像`objdump`的这样的工具与rustc一样支持所有的架构(从ARM到x86_64),因为它们都共享相同的LLVM后端。
## `qemu-system-arm`
QEMU is an emulator. In this case we use the variant that can fully emulate ARM
systems. We use QEMU to run embedded programs on the host. Thanks to this you
can follow some parts of this book even if you don't have any hardware with you!
QEMU是一个通用模拟器,使用它可以完全模拟ARM处理器,这样可以在主机上运行嵌入式程序。幸亏有了 QEMU,这样就算是你没有任何硬件,也可以运行本书的部分示例!
## GDB
调试器对于嵌入式开发非常重要,因为你不一定总能把日志打印到主机控制台,有时硬件上甚至没有可以用来闪烁的LED。
A debugger is a very important component of embedded development as you may not
always have the luxury to log stuff to the host console. In some cases, you may
not even have LEDs to blink on your hardware!
In general, LLDB works as well as GDB when it comes to debugging but we haven't
found an LLDB counterpart to GDB's `load` command, which uploads the program to
the target hardware, so currently we recommend that you use GDB.
通常在调试方面,LLDB和GDB一样好用,但我们还没有找到与GDB的`load`命令(用于把程序上传到目标硬件)相对应的LLDB命令,因此目前我们建议您使用GDB。
## OpenOCD
GDB isn't able to communicate directly with the ST-Link debugging hardware on
your STM32F3DISCOVERY development board. It needs a translator and the Open
On-Chip Debugger, OpenOCD, is that translator. OpenOCD is a program that runs
on your laptop/PC and translates between GDB's TCP/IP based remote debug
protocol and ST-Link's USB based protocol.
OpenOCD also performs other important work as part of its translation for the
debugging of the ARM Cortex-M based microcontroller on your STM32F3DISCOVERY
development board:
* It knows how to interact with the memory mapped registers used by the ARM
CoreSight debug peripheral. It is these CoreSight registers that allow for:
* Breakpoint/Watchpoint manipulation
* Reading and writing of the CPU registers
* Detecting when the CPU has been halted for a debug event
* Continuing CPU execution after a debug event has been encountered
* etc.
* It also knows how to erase and write to the microcontroller's FLASH
GDB无法直接与STM32F3DISCOVERY开发板上的ST-Link调试硬件进行通信。它需要一个翻译器而开放式片上调试器OpenOCD就是那个翻译器。 OpenOCD运行在PC上可在基于TCP/IP的GDB远程调试协议和基于USB的ST-Link协议之间进行转换。
OpenOCD还执行其他重要工作
* 它知道如何与ARM CoreSight调试外设所使用的内存映射寄存器进行交互。正是这些CoreSight寄存器支持:
+ 断点/观察点操作
+ 读取和写入CPU寄存器
+ 检测CPU何时因调试事件而暂停
+ 遇到调试事件后继续执行CPU
+ 其他功能
* 它也知道如何擦除和写入微控制器的FLASH

77
src/intro/tooling_en.md Normal file
View File

@ -0,0 +1,77 @@
# Tooling
Dealing with microcontrollers involves using several different tools as we'll be
dealing with an architecture different than your laptop's and we'll have to run
and debug programs on a *remote* device.
We'll use all the tools listed below. Any recent version should work when a
minimum version is not specified, but we have listed the versions we have
tested.
- Rust 1.31, 1.31-beta, or a newer toolchain PLUS ARM Cortex-M compilation
support.
- [`cargo-binutils`](https://github.com/rust-embedded/cargo-binutils) ~0.1.4
- [`qemu-system-arm`](https://www.qemu.org/). Tested versions: 3.0.0
- OpenOCD >=0.8. Tested versions: v0.9.0 and v0.10.0
- GDB with ARM support. Version 7.12 or newer highly recommended. Tested
versions: 7.10, 7.11, 7.12 and 8.1
- [`cargo-generate`](https://github.com/ashleygwilliams/cargo-generate) or `git`.
These tools are optional but will make it easier to follow along with the book.
The text below explains why we are using these tools. Installation instructions
can be found on the next page.
## `cargo-generate` OR `git`
Bare metal programs are non-standard (`no_std`) Rust programs that require some
adjustments to the linking process in order to get the memory layout of the program
right. This requires some additional files (like linker scripts) and
settings (like linker flags). We have packaged those for you in a template
such that you only need to fill in the missing information (such as the project name and the
characteristics of your target hardware).
Our template is compatible with `cargo-generate`: a Cargo subcommand for
creating new Cargo projects from templates. You can also download the
template using `git`, `curl`, `wget`, or your web browser.
## `cargo-binutils`
`cargo-binutils` is a collection of Cargo subcommands that make it easy to use
the LLVM tools that are shipped with the Rust toolchain. These tools include the
LLVM versions of `objdump`, `nm` and `size` and are used for inspecting
binaries.
The advantage of using these tools over GNU binutils is that (a) installing the
LLVM tools is the same one-command installation (`rustup component add
llvm-tools-preview`) regardless of your OS and (b) tools like `objdump` support
all the architectures that `rustc` supports -- from ARM to x86_64 -- because
they both share the same LLVM backend.
## `qemu-system-arm`
QEMU is an emulator. In this case we use the variant that can fully emulate ARM
systems. We use QEMU to run embedded programs on the host. Thanks to this you
can follow some parts of this book even if you don't have any hardware with you!
## GDB
A debugger is a very important component of embedded development as you may not
always have the luxury to log stuff to the host console. In some cases, you may
not even have LEDs to blink on your hardware!
In general, LLDB works as well as GDB when it comes to debugging but we haven't
found an LLDB counterpart to GDB's `load` command, which uploads the program to
the target hardware, so currently we recommend that you use GDB.
## OpenOCD
GDB isn't able to communicate directly with the ST-Link debugging hardware on your STM32F3DISCOVERY development board. It needs a translator and the Open On-Chip Debugger, OpenOCD, is that translator. OpenOCD is a program that runs on your laptop/PC and translates between GDB's TCP/IP based remote debug protocol and ST-Link's USB based protocol.
OpenOCD also performs other important work as part of its translation for the debugging of the ARM Cortex-M based microcontroller on your STM32F3DISCOVERY development board:
* It knows how to interact with the memory mapped registers used by the ARM CoreSight debug peripheral. It is these CoreSight registers that allow for:
* Breakpoint/Watchpoint manipulation
* Reading and writing of the CPU registers
* Detecting when the CPU has been halted for a debug event
* Continuing CPU execution after a debug event has been encountered
* etc.
* It also knows how to erase and write to the microcontroller's FLASH

View File

@ -1,23 +1,23 @@
# A First Attempt
# 初试Rust
## The Registers
## 寄存器
Let's look at the 'SysTick' peripheral - a simple timer which comes with every Cortex-M processor core. Typically you'll be looking these up in the chip manufacturer's data sheet or *Technical Reference Manual*, but this example is common to all ARM Cortex-M cores, let's look in the [ARM reference manual]. We see there are four registers:
让我们看一下`SysTick`外设(每个Cortex-M处理器内核都自带的简单定时器)。通常,您会在芯片制造商的数据手册或《技术参考手册》中查找这些信息,但此示例对所有ARM Cortex-M内核都是通用的,因此也可以在[ARM参考手册]中查到。我们看到有四个寄存器:
[ARM reference manual]: http://infocenter.arm.com/help/topic/com.arm.doc.dui0553a/Babieigh.html
[ARM参考手册]: http://infocenter.arm.com/help/topic/com.arm.doc.dui0553a/Babieigh.html
| Offset | Name | Description | Width |
|--------|-------------|-----------------------------|--------|
| 0x00 | SYST_CSR | Control and Status Register | 32 bits|
| 0x04 | SYST_RVR | Reload Value Register | 32 bits|
| 0x08 | SYST_CVR | Current Value Register | 32 bits|
| 0x0C | SYST_CALIB | Calibration Value Register | 32 bits|
| 偏移 | 名称 | 描述 | 位宽 |
|------|------------|------------------|-------|
| 0x00 | SYST_CSR | 控制和状态寄存器 | 32位 |
| 0x04 | SYST_RVR | 重新加载值寄存器 | 32位 |
| 0x08 | SYST_CVR | 当前值寄存器 | 32位 |
| 0x0C | SYST_CALIB | 校准值寄存器 | 32位 |
## The C Approach
## C方法
In Rust, we can represent a collection of registers in exactly the same way as we do in C - with a `struct`.
在Rust中我们可以用像C语言一样使用`struct`表示一系列寄存器。
```rust,ignore
```rust,ignore
#[repr(C)]
struct SysTick {
pub csr: u32,
@ -27,35 +27,35 @@ struct SysTick {
}
```
The qualifier `#[repr(C)]` tells the Rust compiler to lay this structure out like a C compiler would. That's very important, as Rust allows structure fields to be re-ordered, while C does not. You can imagine the debugging we'd have to do if these fields were silently re-arranged by the compiler! With this qualifier in place, we have our four 32-bit fields which correspond to the table above. But of course, this `struct` is of no use by itself - we need a variable.
限定符`#[repr(C)]`告诉Rust编译器像C编译器那样布局此结构体。这非常重要因为Rust允许对结构体字段进行重新排序而C不允许。您可以想象如果编译器以静默方式重新排列了这些字段我们调试起来会有多困难有了此限定符后我们就有四个32位字段它们与上表相对应。但是当然这个 `struct` 本身是没有用的-我们需要一个变量。
```rust,ignore
```rust,ignore
let systick = 0xE000_E010 as *mut SysTick;
let time = unsafe { (*systick).cvr };
```
## Volatile Accesses
## 易失性访问
Now, there are a couple of problems with the approach above.
现在,上述方法存在以下问题:
1. We have to use unsafe every time we want to access our Peripheral.
2. We've got no way of specifying which registers are read-only or read-write.
3. Any piece of code anywhere in your program could access the hardware
through this structure.
4. Most importantly, it doesn't actually work...
1. 每次访问外设时我们都必须使用unsafe关键字。
2. 我们无法指定哪些寄存器是只读或读写寄存器。
3. 程序中任何地方的任何代码段都可以通过这种结构访问硬件。
4. 最重要的是,它实际上不起作用...
Now, the problem is that compilers are clever. If you make two writes to the same piece of RAM, one after the other, the compiler can notice this and just skip the first write entirely. In C, we can mark variables as `volatile` to ensure that every read or write occurs as intended. In Rust, we instead mark the *accesses* as volatile, not the variable.
现在的问题是编译器很聪明。如果您对同一块RAM紧挨着进行两次写入则编译器会注意到这一点并且会跳过第一次写入。在C语言中我们可以将变量标记为`volatile`以确保每次读取或写入均按预期进行。在Rust中我们则是将**访问本身**标记为volatile而不是变量。
```rust,ignore
```rust,ignore
let systick = unsafe { &mut *(0xE000_E010 as *mut SysTick) };
let time = unsafe { core::ptr::read_volatile(&mut systick.cvr) };
```
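The block above fixes the read side; to see why plain writes "don't actually work" either, here is a sketch (not from the book) contrasting ordinary stores, which the optimiser may merge, with `write_volatile`, which preserves both stores and their order:

```rust,ignore
use core::ptr::write_volatile;

fn reload_twice_plain(rvr: &mut u32) {
    *rvr = 1000; // the optimiser is free to drop this store entirely...
    *rvr = 2000; // ...and keep only the final value
}

fn reload_twice_volatile(rvr: *mut u32) {
    unsafe {
        write_volatile(rvr, 1000); // both stores are guaranteed to happen,
        write_volatile(rvr, 2000); // in this order
    }
}
```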
So, we've fixed one of our four problems, but now we have even more `unsafe` code! Fortunately, there's a third party crate which can help - [`volatile_register`].
这样,我们解决了四个问题中的一个,但是现在我们有了更多的`unsafe`代码!幸运的是,有一个第三方crate可以提供帮助:[`volatile_register`]。
[`volatile_register`]: https://crates.io/crates/volatile_register
[`volatile_register`]: https://crates.io/crates/volatile_register
```rust,ignore
```rust,ignore
use volatile_register::{RW, RO};
#[repr(C)]
@ -76,15 +76,15 @@ fn get_time() -> u32 {
}
```
Now, the volatile accesses are performed automatically through the `read` and `write` methods. It's still `unsafe` to perform writes, but to be fair, hardware is a bunch of mutable state and there's no way for the compiler to know whether these writes are actually safe, so this is a good default position.
现在,通过`read`和`write`方法会自动执行易失性(volatile)访问。但是执行写入仍然是`unsafe`,公平地说,硬件是一堆易变的状态,编译器无法知道这些写入是否实际上是安全的,因此这是一个很好的默认设置。
## The Rusty Wrapper
## Rust封装
We need to wrap this `struct` up into a higher-layer API that is safe for our users to call. As the driver author, we manually verify the unsafe code is correct, and then present a safe API for our users so they don't have to worry about it (provided they trust us to get it right!).
我们需要将此`struct`封装到一个更高层API中以使我们的用户可以安全地调用它。作为驱动程序开发者我们手动验证不安全的代码是否正确然后为我们的用户提供一个安全的API以便他们不必担心它(只要他们相信我们是正确的!)。
One example might be:
一个示例可能是:
```rust,ignore
```rust,ignore
use volatile_register::{RW, RO};
pub struct SystemTimer {
@ -122,9 +122,9 @@ pub fn example_usage() -> String {
}
```
Now, the problem with this approach is that the following code is perfectly acceptable to the compiler:
但是,这种方法的问题在于,下面的代码在编译器看来是完全合法的:
```rust,ignore
```rust,ignore
fn thread1() {
let mut st = SystemTimer::new();
st.set_reload(2000);
@ -136,4 +136,4 @@ fn thread2() {
}
```
Our `&mut self` argument to the `set_reload` function checks that there are no other references to *that* particular `SystemTimer` struct, but they don't stop the user creating a second `SystemTimer` which points to the exact same peripheral! Code written in this fashion will work if the author is diligent enough to spot all of these 'duplicate' driver instances, but once the code is spread out over multiple modules, drivers, developers, and days, it gets easier and easier to make these kinds of mistakes.
`set_reload`函数的`&mut self`参数确保没有其他对*这个*`SystemTimer`实例的引用,但它并不能阻止用户创建第二个指向完全相同外设的`SystemTimer`!如果作者足够勤勉,能发现所有这些“重复”的驱动实例,那么以这种方式编写的代码也可以工作;但是一旦代码分散到多个模块、多个驱动、多个开发者和更长的时间里,就越来越容易犯这类错误。

View File

@ -0,0 +1,139 @@
# A First Attempt
## The Registers
Let's look at the 'SysTick' peripheral - a simple timer which comes with every Cortex-M processor core. Typically you'll be looking these up in the chip manufacturer's data sheet or *Technical Reference Manual*, but this example is common to all ARM Cortex-M cores, let's look in the [ARM reference manual]. We see there are four registers:
[ARM reference manual]: http://infocenter.arm.com/help/topic/com.arm.doc.dui0553a/Babieigh.html
| Offset | Name | Description | Width |
|--------|-------------|-----------------------------|--------|
| 0x00 | SYST_CSR | Control and Status Register | 32 bits|
| 0x04 | SYST_RVR | Reload Value Register | 32 bits|
| 0x08 | SYST_CVR | Current Value Register | 32 bits|
| 0x0C | SYST_CALIB | Calibration Value Register | 32 bits|
## The C Approach
In Rust, we can represent a collection of registers in exactly the same way as we do in C - with a `struct`.
```rust,ignore
#[repr(C)]
struct SysTick {
pub csr: u32,
pub rvr: u32,
pub cvr: u32,
pub calib: u32,
}
```
The qualifier `#[repr(C)]` tells the Rust compiler to lay this structure out like a C compiler would. That's very important, as Rust allows structure fields to be re-ordered, while C does not. You can imagine the debugging we'd have to do if these fields were silently re-arranged by the compiler! With this qualifier in place, we have our four 32-bit fields which correspond to the table above. But of course, this `struct` is of no use by itself - we need a variable.
```rust,ignore
let systick = 0xE000_E010 as *mut SysTick;
let time = unsafe { (*systick).cvr };
```
## Volatile Accesses
Now, there are a couple of problems with the approach above.
1. We have to use unsafe every time we want to access our Peripheral.
2. We've got no way of specifying which registers are read-only or read-write.
3. Any piece of code anywhere in your program could access the hardware
through this structure.
4. Most importantly, it doesn't actually work...
Now, the problem is that compilers are clever. If you make two writes to the same piece of RAM, one after the other, the compiler can notice this and just skip the first write entirely. In C, we can mark variables as `volatile` to ensure that every read or write occurs as intended. In Rust, we instead mark the *accesses* as volatile, not the variable.
```rust,ignore
let systick = unsafe { &mut *(0xE000_E010 as *mut SysTick) };
let time = unsafe { core::ptr::read_volatile(&mut systick.cvr) };
```
So, we've fixed one of our four problems, but now we have even more `unsafe` code! Fortunately, there's a third party crate which can help - [`volatile_register`].
[`volatile_register`]: https://crates.io/crates/volatile_register
```rust,ignore
use volatile_register::{RW, RO};
#[repr(C)]
struct SysTick {
pub csr: RW<u32>,
pub rvr: RW<u32>,
pub cvr: RW<u32>,
pub calib: RO<u32>,
}
fn get_systick() -> &'static mut SysTick {
unsafe { &mut *(0xE000_E010 as *mut SysTick) }
}
fn get_time() -> u32 {
let systick = get_systick();
systick.cvr.read()
}
```
Now, the volatile accesses are performed automatically through the `read` and `write` methods. It's still `unsafe` to perform writes, but to be fair, hardware is a bunch of mutable state and there's no way for the compiler to know whether these writes are actually safe, so this is a good default position.
## The Rusty Wrapper
We need to wrap this `struct` up into a higher-layer API that is safe for our users to call. As the driver author, we manually verify the unsafe code is correct, and then present a safe API for our users so they don't have to worry about it (provided they trust us to get it right!).
One example might be:
```rust,ignore
use volatile_register::{RW, RO};
pub struct SystemTimer {
p: &'static mut RegisterBlock
}
#[repr(C)]
struct RegisterBlock {
pub csr: RW<u32>,
pub rvr: RW<u32>,
pub cvr: RW<u32>,
pub calib: RO<u32>,
}
impl SystemTimer {
pub fn new() -> SystemTimer {
SystemTimer {
p: unsafe { &mut *(0xE000_E010 as *mut RegisterBlock) }
}
}
pub fn get_time(&self) -> u32 {
self.p.cvr.read()
}
pub fn set_reload(&mut self, reload_value: u32) {
unsafe { self.p.rvr.write(reload_value) }
}
}
pub fn example_usage() -> String {
let mut st = SystemTimer::new();
st.set_reload(0x00FF_FFFF);
format!("Time is now 0x{:08x}", st.get_time())
}
```
Now, the problem with this approach is that the following code is perfectly acceptable to the compiler:
```rust,ignore
fn thread1() {
let mut st = SystemTimer::new();
st.set_reload(2000);
}
fn thread2() {
let mut st = SystemTimer::new();
st.set_reload(1000);
}
```
Our `&mut self` argument to the `set_reload` function checks that there are no other references to *that* particular `SystemTimer` struct, but they don't stop the user creating a second `SystemTimer` which points to the exact same peripheral! Code written in this fashion will work if the author is diligent enough to spot all of these 'duplicate' driver instances, but once the code is spread out over multiple modules, drivers, developers, and days, it gets easier and easier to make these kinds of mistakes.

View File

@ -1,19 +1,19 @@
## Mutable Global State
## 全局可变状态
Unfortunately, hardware is basically nothing but mutable global state, which can feel very frightening for a Rust developer. Hardware exists independently from the structures of the code we write, and can be modified at any time by the real world.
不幸的是硬件基本上只不过是可变的全局状态这可能会让Rust开发人员来感到非常棘手。但是硬件本来就独立于我们编写的结构体代码并且在现实世界中就是随时可以进行修改。
## What should our rules be?
## 我们的规则应该是什么?
How can we reliably interact with these peripherals?
我们如何与这些外围设备可靠地交互?
1. Always use `volatile` methods to read or write to peripheral memory, as it can change at any time
2. In software, we should be able to share any number of read-only accesses to these peripherals
3. If some software should have read-write access to a peripheral, it should hold the only reference to that peripheral
1. 始终使用`volatile`方法读取或写入外围存储器,因为它随时可能发生变化
2. 在软件中,应该允许同时存在对这些外设的任意数量的只读访问
3. 如果某些软件需要对外设的读写访问权限,则它应该持有该外设的唯一引用
## The Borrow Checker
## 借用检查器
The last two of these rules sound suspiciously similar to what the Borrow Checker does already!
这些规则中的最后两个听起来和借用检查器的工作机制非常类似!
Imagine if we could pass around ownership of these peripherals, or offer immutable or mutable references to them?
想象一下,如果我们可以传递这些外设的所有权,或者提供对它们的不可变或可变引用,会怎么样?
Well, we can, but for the Borrow Checker, we need to have exactly one instance of each peripheral, so Rust can handle this correctly. Well, luckily in the hardware, there is only one instance of any given peripheral, but how can we expose that in the structure of our code?
好吧,我们可以. 但是对于借用检查器我们需要每个外围设备都只有一个实例以便Rust可以正确处理。 幸运的是,在硬件中,任何给定的外设都只有一个实例,但是如何设计访问接口呢?
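A small sketch of how these rules map onto ordinary borrows once there is exactly one instance; `GpioPin` here is a stand-in type, not a real HAL type:

```rust,ignore
struct GpioPin;

impl GpioPin {
    fn read(&self) -> bool {
        // a volatile register read would go here
        true
    }
    fn set_high(&mut self) {
        // a volatile register write would go here
    }
}

fn main() {
    let mut pin = GpioPin;          // exactly one instance
    let r1 = &pin;
    let r2 = &pin;                  // rule 2: any number of read-only borrows
    let _ = (r1.read(), r2.read());
    pin.set_high();                 // rule 3: mutation needs exclusive access
    // let r3 = &pin; pin.set_high(); let _ = r3.read(); // would not compile
}
```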

View File

@ -0,0 +1,19 @@
## Mutable Global State
Unfortunately, hardware is basically nothing but mutable global state, which can feel very frightening for a Rust developer. Hardware exists independently from the structures of the code we write, and can be modified at any time by the real world.
## What should our rules be?
How can we reliably interact with these peripherals?
1. Always use `volatile` methods to read or write to peripheral memory, as it can change at any time
2. In software, we should be able to share any number of read-only accesses to these peripherals
3. If some software should have read-write access to a peripheral, it should hold the only reference to that peripheral
## The Borrow Checker
The last two of these rules sound suspiciously similar to what the Borrow Checker does already!
Imagine if we could pass around ownership of these peripherals, or offer immutable or mutable references to them?
Well, we can, but for the Borrow Checker, we need to have exactly one instance of each peripheral, so Rust can handle this correctly. Well, luckily in the hardware, there is only one instance of any given peripheral, but how can we expose that in the structure of our code?

View File

@ -1,44 +1,44 @@
# Peripherals
# 外设
## What are Peripherals?
## 什么是外围设备?
Most Microcontrollers have more than just a CPU, RAM, or Flash Memory - they contain sections of silicon which are used for interacting with systems outside of the microcontroller, as well as directly and indirectly interacting with their surroundings in the world via sensors, motor controllers, or human interfaces such as a display or keyboard. These components are collectively known as Peripherals.
大多数微控制器都是SoC(片上系统),不仅仅具有CPU、RAM和闪存:芯片内部还集成了各种用于与外部系统交互的电路,例如通过传感器、电机控制器,或显示器、键盘等人机接口,直接或间接地与周围环境交互。这些组件统称为外围设备。
These peripherals are useful because they allow a developer to offload processing to them, avoiding having to handle everything in software. Similar to how a desktop developer would offload graphics processing to a video card, embedded developers can offload some tasks to peripherals allowing the CPU to spend its time doing something else important, or doing nothing in order to save power.
这些外围设备很有用因为它们使开发人员可以将一部分工作分派出去而不必全部由软件来实现。与台式机开发人员将图形处理任务分派给显卡的方式类似嵌入式开发人员可以将某些任务分派到外围设备从而使CPU可以将时间花在更重要的事情上或者不做任何事以节省功耗。
If you look at the main circuit board in an old-fashioned home computer from the 1970s or 1980s (and actually, the desktop PCs of yesterday are not so far removed from the embedded systems of today) you would expect to see:
如果您看一下1970年代或1980年代的老式家用计算机中的主板(实际上,以前的台式机与今天的嵌入式系统相去不远),您会看到:
* A processor
* A RAM chip
* A ROM chip
* An I/O controller
* 处理器
* RAM芯片
* ROM芯片
* I/O控制器
The RAM chip, ROM chip and I/O controller (the peripheral in this system) would be joined to the processor through a series of parallel traces known as a 'bus'. This bus carries address information, which selects which device on the bus the processor wishes to communicate with, and a data bus which carries the actual data. In our embedded microcontrollers, the same principles apply - it's just that everything is packed on to a single piece of silicon.
RAM芯片ROM芯片和I/O控制器(此系统中的外围设备)通过“总线”连接到处理器。处理器通过地址总线选择与哪个设备进行通信,通过数据总线传输数据。在我们的嵌入式微控制器中,原理都是一样的-只是将所有内容包装在一块芯片内。
However, unlike graphics cards, which typically have a Software API like Vulkan, Metal, or OpenGL, peripherals are exposed to our Microcontroller with a hardware interface, which is mapped to a chunk of the memory.
但是与显卡不同的是,显卡一般提供了像VulkanMetal或OpenGL之类的软件API而嵌入式外设通过内存映射的方式,直接将硬件接口暴露给我们的微控制器。
## Linear and Real Memory Space
## 线性和物理内存地址空间
On a microcontroller, writing some data to some other arbitrary address, such as `0x4000_0000` or `0x0000_0000`, may also be a completely valid action.
在微控制器上,将一些数据写入任意地址,例如`0x4000_0000`或`0x0000_0000`,也可能是完全有效的操作。
On a desktop system, access to memory is tightly controlled by the MMU, or Memory Management Unit. This component has two major responsibilities: enforcing access permission to sections of memory (preventing one process from reading or modifying the memory of another process); and re-mapping segments of the physical memory to virtual memory ranges used in software. Microcontrollers do not typically have an MMU, and instead only use real physical addresses in software.
在台式机系统上,对内存的访问由内存管理单元(MMU)严格控制,MMU有两个主要职责强制执行对内存的访问权限(防止一个进程读取或修改另一进程的内存)并将物理内存的地址重新映射到软件中使用的虚拟内存地址。微控制器通常没有MMU而仅使用实际物理地址。
Although 32 bit microcontrollers have a real and linear address space from `0x0000_0000`, and `0xFFFF_FFFF`, they generally only use a few hundred kilobytes of that range for actual memory. This leaves a significant amount of address space remaining. In earlier chapters, we were talking about RAM being located at address `0x2000_0000`. If our RAM was 64 KiB long (i.e. with a maximum address of 0xFFFF) then addresses `0x2000_0000` to `0x2000_FFFF` would correspond to our RAM. When we write to a variable which lives at address `0x2000_1234`, what happens internally is that some logic detects the upper portion of the address (0x2000 in this example) and then activates the RAM so that it can act upon the lower portion of the address (0x1234 in this case). On a Cortex-M we also have our Flash ROM mapped in at address `0x0000_0000` up to, say, address `0x0007_FFFF` (if we have a 512 KiB Flash ROM). Rather than ignore all remaining space between these two regions, Microcontroller designers instead mapped the interface for peripherals in certain memory locations. This ends up looking something like this:
尽管32位微控制器具有从0x0000_0000到0xFFFF_FFFF的物理和线性地址空间但它们通常仅使用该范围的几百K字节作为实际内存。这留下了大量的可用地址空间。在前面的章节中我们讨论了位于地址“0x2000_0000”上的RAM。如果我们的RAM大小为64 KiB(即最大地址为0xFFFF)则地址“0x2000_0000”到“0x2000_FFFF”将对应于我们的RAM。当我们写入位于地址“0x2000_1234”的变量时 某些逻辑检测地址的上半部分(在此示例中为0x2000)然后激活RAM由RAM来处理地址的下半部分(在这种情况下为0x1234)。在Cortex-M上我们将Flash ROM映射到地址“0x0000_0000”到地址“0x0007_FFFF”之间(如果我们有512 KiB Flash ROM)。微控制器设计人员没有忽略这两个区域之间的剩余地址空间,而是将某些内存位置映射给了外设。最终看起来像这样:
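A quick arithmetic check of the 64 KiB example above; the constants simply restate the numbers from the text:

```rust,ignore
const RAM_START: usize = 0x2000_0000;
const RAM_SIZE: usize = 64 * 1024; // 64 KiB

fn ram_range() -> (usize, usize) {
    let ram_end = RAM_START + RAM_SIZE - 1;
    assert_eq!(ram_end, 0x2000_FFFF); // matches the range described above
    (RAM_START, ram_end)
}
```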
![](../assets/nrf52-memory-map.png)
[Nordic nRF52832 Datasheet (pdf)]
[Nordic nRF52832手册(pdf)]
## Memory Mapped Peripherals
## 内存映射的外围设备
Interaction with these peripherals is simple at a first glance - write the right data to the correct address. For example, sending a 32 bit word over a serial port could be as direct as writing that 32 bit word to a certain memory address. The Serial Port Peripheral would then take over and send out the data automatically.
乍看之下,与这些外设的交互非常简单-将正确的数据写入正确的地址。例如通过串行端口发送32位字可能与将32位字写入某个内存地址一样直接。然后串行端口外围设备将接管并自动发送数据。
Configuration of these peripherals works similarly. Instead of calling a function to configure a peripheral, a chunk of memory is exposed which serves as the hardware API. Write `0x8000_0000` to a SPI Frequency Configuration Register, and the SPI port will send data at 8 Megabits per second. Write `0x0200_0000` to the same address, and the SPI port will send data at 125 Kilobits per second. These configuration registers look a little bit like this:
这些外设的配置方式也类似:不是调用某个函数来配置外设,而是把一块内存区域暴露出来作为硬件API。例如,将`0x8000_0000`写入SPI频率配置寄存器,SPI端口将以每秒8兆位的速度发送数据;将`0x0200_0000`写入同一地址,SPI端口则以每秒125千位的速度发送数据。这些配置寄存器看起来像这样:
![](../assets/nrf52-spi-frequency-register.png)
[Nordic nRF52832 Datasheet (pdf)]
[Nordic nRF52832手册(pdf)]
This interface is how interactions with the hardware are made, no matter what language is used, whether that language is Assembly, C, or Rust.
无论使用哪种语言无论该语言是AssemblyC还是Rust该接口都是与硬件进行交互的方式。
[Nordic nRF52832 Datasheet (pdf)]: http://infocenter.nordicsemi.com/pdf/nRF52832_PS_v1.1.pdf
[Nordic nRF52832手册(pdf)]: http://infocenter.nordicsemi.com/pdf/nRF52832_PS_v1.1.pdf
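As a sketch of "write the right data to the correct address": the register address below is an assumption based on the nRF52832 datasheet layout (SPI base plus the FREQUENCY offset) and the values follow the text, so treat it as an illustration rather than a tested driver.

```rust,ignore
use core::ptr::write_volatile;

// Address assumed from the nRF52832 datasheet (SPI0 base + FREQUENCY offset);
// double-check against your own chip's reference manual.
const SPI0_FREQUENCY: *mut u32 = 0x4000_3524 as *mut u32;

fn set_spi_8mbps() {
    unsafe {
        // 0x8000_0000 selects 8 Mbps, 0x0200_0000 selects 125 kbps (see text)
        write_volatile(SPI0_FREQUENCY, 0x8000_0000);
    }
}
```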

View File

@ -0,0 +1,44 @@
# Peripherals
## What are Peripherals?
Most Microcontrollers have more than just a CPU, RAM, or Flash Memory - they contain sections of silicon which are used for interacting with systems outside of the microcontroller, as well as directly and indirectly interacting with their surroundings in the world via sensors, motor controllers, or human interfaces such as a display or keyboard. These components are collectively known as Peripherals.
These peripherals are useful because they allow a developer to offload processing to them, avoiding having to handle everything in software. Similar to how a desktop developer would offload graphics processing to a video card, embedded developers can offload some tasks to peripherals allowing the CPU to spend its time doing something else important, or doing nothing in order to save power.
If you look at the main circuit board in an old-fashioned home computer from the 1970s or 1980s (and actually, the desktop PCs of yesterday are not so far removed from the embedded systems of today) you would expect to see:
* A processor
* A RAM chip
* A ROM chip
* An I/O controller
The RAM chip, ROM chip and I/O controller (the peripheral in this system) would be joined to the processor through a series of parallel traces known as a 'bus'. This bus carries address information, which selects which device on the bus the processor wishes to communicate with, and a data bus which carries the actual data. In our embedded microcontrollers, the same principles apply - it's just that everything is packed on to a single piece of silicon.
However, unlike graphics cards, which typically have a Software API like Vulkan, Metal, or OpenGL, peripherals are exposed to our Microcontroller with a hardware interface, which is mapped to a chunk of the memory.
## Linear and Real Memory Space
On a microcontroller, writing some data to some other arbitrary address, such as `0x4000_0000` or `0x0000_0000`, may also be a completely valid action.
On a desktop system, access to memory is tightly controlled by the MMU, or Memory Management Unit. This component has two major responsibilities: enforcing access permission to sections of memory (preventing one process from reading or modifying the memory of another process); and re-mapping segments of the physical memory to virtual memory ranges used in software. Microcontrollers do not typically have an MMU, and instead only use real physical addresses in software.
Although 32 bit microcontrollers have a real and linear address space from `0x0000_0000`, and `0xFFFF_FFFF`, they generally only use a few hundred kilobytes of that range for actual memory. This leaves a significant amount of address space remaining. In earlier chapters, we were talking about RAM being located at address `0x2000_0000`. If our RAM was 64 KiB long (i.e. with a maximum address of 0xFFFF) then addresses `0x2000_0000` to `0x2000_FFFF` would correspond to our RAM. When we write to a variable which lives at address `0x2000_1234`, what happens internally is that some logic detects the upper portion of the address (0x2000 in this example) and then activates the RAM so that it can act upon the lower portion of the address (0x1234 in this case). On a Cortex-M we also have our Flash ROM mapped in at address `0x0000_0000` up to, say, address `0x0007_FFFF` (if we have a 512 KiB Flash ROM). Rather than ignore all remaining space between these two regions, Microcontroller designers instead mapped the interface for peripherals in certain memory locations. This ends up looking something like this:
![](../assets/nrf52-memory-map.png)
[Nordic nRF52832 Datasheet (pdf)]
## Memory Mapped Peripherals
Interaction with these peripherals is simple at a first glance - write the right data to the correct address. For example, sending a 32 bit word over a serial port could be as direct as writing that 32 bit word to a certain memory address. The Serial Port Peripheral would then take over and send out the data automatically.
Configuration of these peripherals works similarly. Instead of calling a function to configure a peripheral, a chunk of memory is exposed which serves as the hardware API. Write `0x8000_0000` to a SPI Frequency Configuration Register, and the SPI port will send data at 8 Megabits per second. Write `0x0200_0000` to the same address, and the SPI port will send data at 125 Kilobits per second. These configuration registers look a little bit like this:
![](../assets/nrf52-spi-frequency-register.png)
[Nordic nRF52832 Datasheet (pdf)]
This interface is how interactions with the hardware are made, no matter what language is used, whether that language is Assembly, C, or Rust.
[Nordic nRF52832 Datasheet (pdf)]: http://infocenter.nordicsemi.com/pdf/nRF52832_PS_v1.1.pdf

View File

@ -1,17 +1,18 @@
# Singletons
# 单例
> In software engineering, the singleton pattern is a software design pattern that restricts the instantiation of a class to one object.
>在软件工程中,单例模式是一种软件设计模式,它限制类只有一个实例。
>
> *Wikipedia: [Singleton Pattern]*
> *维基百科:[单例模式]*
[Singleton Pattern]: https://en.wikipedia.org/wiki/Singleton_pattern
[单例模式]: https://en.wikipedia.org/wiki/Singleton_pattern
## But why can't we just use global variable(s)?
## 为什么我们不能直接使用全局变量?
We could make everything a public static, like this
我们可以像这样将所有内容设为公共静态
```rust,ignore
```rust,ignore
static mut THE_SERIAL_PORT: SerialPort = SerialPort;
fn main() {
@ -21,13 +22,14 @@ fn main() {
}
```
But this has a few problems. It is a mutable global variable, and in Rust, these are always unsafe to interact with. These variables are also visible across your whole program, which means the borrow checker is unable to help you track references and ownership of these variables.
## How do we do this in Rust?
但这有一些问题。它是一个可变的全局变量在Rust中与它们进行交互总是不安全的。这些变量在整个程序中也是可见的这意味着借用检查器无法帮助您跟踪这些变量的引用和所有权。
Instead of just making our peripheral a global variable, we might instead decide to make a global variable, in this case called `PERIPHERALS`, which contains an `Option<T>` for each of our peripherals.
## 我们如何在Rust中做到这一点
```rust,ignore
我们不是简单地把每个外设都做成全局变量,而是创建一个名为`PERIPHERALS`的全局变量,它为每个外设保存一个`Option<T>`。
```rust,ignore
struct Peripherals {
serial: Option<SerialPort>,
}
@ -42,9 +44,9 @@ static mut PERIPHERALS: Peripherals = Peripherals {
};
```
This structure allows us to obtain a single instance of our peripheral. If we try to call `take_serial()` more than once, our code will panic!
这种结构使我们可以获得外围设备的单个实例。如果我们尝试多次调用`take_serial()`,代码将会崩溃!
```rust,ignore
```rust,ignore
fn main() {
let serial_1 = unsafe { PERIPHERALS.take_serial() };
// This panics!
@ -52,15 +54,16 @@ fn main() {
}
```
Although interacting with this structure is `unsafe`, once we have the `SerialPort` it contained, we no longer need to use `unsafe`, or the `PERIPHERALS` structure at all.
尽管与这个结构体交互需要`unsafe`,但一旦取得了它内部的`SerialPort`,我们就不再需要使用`unsafe`,也完全不需要再用到`PERIPHERALS`结构体。
This has a small runtime overhead because we must wrap the `SerialPort` structure in an option, and we'll need to call `take_serial()` once, however this small up-front cost allows us to leverage the borrow checker throughout the rest of our program.
这具有很小的运行时开销,因为我们必须将`SerialPort`结构包装在一个Option中并且需要调用一次`take_serial()`,但是,这笔小小的前期成本使我们能够在其余所有过程中利用借用检查器检查我们的程序。
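For comparison, here is a sketch (not the book's implementation) of the same hand-it-out-once idea using an atomic flag, so a second call returns `None` instead of panicking; on ARMv6-M cores without atomic swap a critical section would be needed instead.

```rust,ignore
use core::sync::atomic::{AtomicBool, Ordering};

struct SerialPort;

static SERIAL_TAKEN: AtomicBool = AtomicBool::new(false);

fn take_serial() -> Option<SerialPort> {
    // `swap` returns the previous value, so only the first caller gets the port.
    if SERIAL_TAKEN.swap(true, Ordering::SeqCst) {
        None
    } else {
        Some(SerialPort)
    }
}

fn main() {
    assert!(take_serial().is_some()); // first call succeeds
    assert!(take_serial().is_none()); // second call is refused instead of panicking
}
```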
## Existing library support
## 现有库支持
Although we created our own `Peripherals` structure above, it is not necessary to do this for your code. the `cortex_m` crate contains a macro called `singleton!()` that will perform this action for you.
尽管我们在上面创建了自己的`Peripherals`结构体,但实际上你的代码中无需这么操作。 `cortex_m`crate包含一个名为`singleton!()`的宏,它将为您执行此操作。
```rust,ignore
```rust,ignore
#[macro_use(singleton)]
extern crate cortex_m;
@ -73,9 +76,9 @@ fn main() {
[cortex_m docs](https://docs.rs/cortex-m/latest/cortex_m/macro.singleton.html)
Additionally, if you use `cortex-m-rtfm`, the entire process of defining and obtaining these peripherals are abstracted for you, and you are instead handed a `Peripherals` structure that contains a non-`Option<T>` version of all of the items you define.
此外,如果您使用`cortex-m-rtfm`,定义和获取这些外设的整个过程都已经帮您封装好了:您会得到一个`Peripherals`结构体,其中包含您所定义的所有项目的非`Option<T>`版本。
```rust,ignore
```rust,ignore
// cortex-m-rtfm v0.3.x
app! {
resources: {
@ -91,11 +94,11 @@ fn init(p: init::Peripherals) -> init::LateResources {
[japaric.io rtfm v3](https://blog.japaric.io/rtfm-v3/)
## But why?
## 但为什么?
But how do these Singletons make a noticeable difference in how our Rust code works?
但是这些单例化能产生什么显著不同?
```rust,ignore
```rust,ignore
impl SerialPort {
const SER_PORT_SPEED_REG: *mut u32 = 0x4000_1000 as _;
@ -109,14 +112,15 @@ impl SerialPort {
}
```
There are two important factors in play here:
这里有两个重要因素:
* Because we are using a singleton, there is only one way or place to obtain a `SerialPort` structure
* To call the `read_speed()` method, we must have ownership or a reference to a `SerialPort` structure
* 因为我们使用的是单例,所以只有一种方法可以获得`SerialPort`实例
* 要调用`read_speed()`方法,我们必须对`SerialPort`实例拥有借用或者所有权
These two factors put together means that it is only possible to access the hardware if we have appropriately satisfied the borrow checker, meaning that at no point do we have multiple mutable references to the same hardware!
这两个因素加在一起意味着:只有在满足借用检查器的情况下才能访问硬件,也就是说,我们在任何时候都不会持有对同一硬件的多个可变引用!
```rust,ignore
```rust,ignore
fn main() {
// missing reference to `self`! Won't work.
// SerialPort::read_speed();
@ -128,13 +132,13 @@ fn main() {
}
```
## Treat your hardware like data
## 将您的硬件视为数据
Additionally, because some references are mutable, and some are immutable, it becomes possible to see whether a function or method could potentially modify the state of the hardware. For example,
此外,由于某些引用是可变的,而有些则是不可变的,因此通过函数签名就可以判断是否可能潜在地修改硬件的状态。例如,
This is allowed to change hardware settings:
下面这个函数允许更改硬件设置:
```rust,ignore
```rust,ignore
fn setup_spi_port(
spi: &mut SpiPort,
cs_pin: &mut GpioPin
@ -143,12 +147,12 @@ fn setup_spi_port(
}
```
This isn't:
下面这个则不可以:
```rust,ignore
```rust,ignore
fn read_button(gpio: &GpioPin) -> bool {
// ...
}
```
This allows us to enforce whether code should or should not make changes to hardware at **compile time**, rather than at runtime. As a note, this generally only works across one application, but for bare metal systems, our software will be compiled into a single application, so this is not usually a restriction.
这使我们能够在**编译时**(而不是运行时)强制规定代码是否可以修改硬件。需要注意的是,这通常只在单个应用程序内有效;不过对于裸机系统来说,我们的软件会被编译成单个应用程序,所以这通常不算限制。(译注:如果存在多个程序,它们可以各自构建自己的单例,而硬件外设只有一份,这种情况下仍然是不安全的。)
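A compact sketch of that compile-time enforcement, using a stand-in `SpiPort` type (not a real HAL type): the signature alone tells the reader whether a function can change hardware state.

```rust,ignore
struct SpiPort;

impl SpiPort {
    fn set_frequency(&mut self, _hz: u32) { /* volatile writes would go here */ }
    fn read_status(&self) -> u32 { 0 }
}

fn configure(spi: &mut SpiPort) {
    spi.set_frequency(8_000_000); // allowed: we hold exclusive access
}

fn monitor(spi: &SpiPort) -> u32 {
    // spi.set_frequency(1_000_000); // error[E0596]: cannot borrow `*spi` as mutable
    spi.read_status()
}

fn main() {
    let mut spi = SpiPort;
    configure(&mut spi);
    let _status = monitor(&spi);
}
```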

View File

@ -0,0 +1,154 @@
# Singletons
> In software engineering, the singleton pattern is a software design pattern that restricts the instantiation of a class to one object.
>
> *Wikipedia: [Singleton Pattern]*
[Singleton Pattern]: https://en.wikipedia.org/wiki/Singleton_pattern
## But why can't we just use global variable(s)?
We could make everything a public static, like this
```rust,ignore
static mut THE_SERIAL_PORT: SerialPort = SerialPort;
fn main() {
let _ = unsafe {
THE_SERIAL_PORT.read_speed();
};
}
```
But this has a few problems. It is a mutable global variable, and in Rust, these are always unsafe to interact with. These variables are also visible across your whole program, which means the borrow checker is unable to help you track references and ownership of these variables.
## How do we do this in Rust?
Instead of just making our peripheral a global variable, we might instead decide to make a global variable, in this case called `PERIPHERALS`, which contains an `Option<T>` for each of our peripherals.
```rust,ignore
struct Peripherals {
serial: Option<SerialPort>,
}
impl Peripherals {
fn take_serial(&mut self) -> SerialPort {
let p = replace(&mut self.serial, None);
p.unwrap()
}
}
static mut PERIPHERALS: Peripherals = Peripherals {
serial: Some(SerialPort),
};
```
This structure allows us to obtain a single instance of our peripheral. If we try to call `take_serial()` more than once, our code will panic!
```rust,ignore
fn main() {
let serial_1 = unsafe { PERIPHERALS.take_serial() };
// This panics!
// let serial_2 = unsafe { PERIPHERALS.take_serial() };
}
```
Although interacting with this structure is `unsafe`, once we have the `SerialPort` it contained, we no longer need to use `unsafe`, or the structure at all.
This has a small runtime overhead because we must wrap the `SerialPort` structure in an option, and we'll need to call `take_serial()` once, however this small up-front cost allows us to leverage the borrow checker throughout the rest of our program.
## Existing library support
Although we created our own `Peripherals` structure above, it is not necessary to do this for your code. the `cortex_m` crate contains a macro called `singleton!()` that will perform this action for you.
```rust,ignore
#[macro_use(singleton)]
extern crate cortex_m;
fn main() {
// OK if `main` is executed only once
let x: &'static mut bool =
singleton!(: bool = false).unwrap();
}
```
[cortex_m docs](https://docs.rs/cortex-m/latest/cortex_m/macro.singleton.html)
Additionally, if you use `cortex-m-rtfm`, the entire process of defining and obtaining these peripherals are abstracted for you, and you are instead handed a `Peripherals` structure that contains a non-`Option<T>` version of all of the items you define.
```rust,ignore
// cortex-m-rtfm v0.3.x
app! {
resources: {
static RX: Rx<USART1>;
static TX: Tx<USART1>;
}
}
fn init(p: init::Peripherals) -> init::LateResources {
// Note that this is now an owned value, not a reference
let usart1: USART1 = p.device.USART1;
}
```
[japaric.io rtfm v3](https://blog.japaric.io/rtfm-v3/)
## But why?
But how do these Singletons make a noticeable difference in how our Rust code works?
```rust,ignore
impl SerialPort {
const SER_PORT_SPEED_REG: *mut u32 = 0x4000_1000 as _;
fn read_speed(
&self // <------ This is really, really important
) -> u32 {
unsafe {
ptr::read_volatile(Self::SER_PORT_SPEED_REG)
}
}
}
```
There are two important factors in play here:
* Because we are using a singleton, there is only one way or place to obtain a `SerialPort` structure
* To call the `read_speed()` method, we must have ownership or a reference to a `SerialPort` structure
These two factors put together means that it is only possible to access the hardware if we have appropriately satisfied the borrow checker, meaning that at no point do we have multiple mutable references to the same hardware!
```rust,ignore
fn main() {
// missing reference to `self`! Won't work.
// SerialPort::read_speed();
let serial_1 = unsafe { PERIPHERALS.take_serial() };
// you can only read what you have access to
let _ = serial_1.read_speed();
}
```
## Treat your hardware like data
Additionally, because some references are mutable, and some are immutable, it becomes possible to see whether a function or method could potentially modify the state of the hardware. For example,
This is allowed to change hardware settings:
```rust , ignore
fn setup_spi_port(
spi: &mut SpiPort,
cs_pin: &mut GpioPin
) -> Result<()> {
// ...
}
```
This isn't:
```rust , ignore
fn read_button(gpio: &GpioPin) -> bool {
// ...
}
```
This allows us to enforce whether code should or should not make changes to hardware at **compile time**, rather than at runtime. As a note, this generally only works across one application, but for bare metal systems, our software will be compiled into a single application, so this is not usually a restriction.
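As a minimal sketch of that compile-time enforcement (the `GpioPin` type and its methods below are hypothetical, not taken from any real HAL):

```rust , ignore
struct GpioPin;

impl GpioPin {
    // changing the hardware requires exclusive access: `&mut self`
    fn set_high(&mut self) { /* write to an output register */ }
    // observing the hardware only needs shared access: `&self`
    fn is_high(&self) -> bool { /* read an input register */ true }
}

fn read_button(gpio: &GpioPin) -> bool {
    // gpio.set_high(); // error: cannot borrow `*gpio` as mutable
    gpio.is_high()
}
```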

View File

@ -1,62 +1,65 @@
# Portability
# 可移植性
In embedded environments portability is a very important topic: Every vendor and even each family from a single manufacturer offers different peripherals and capabilities and similarly the ways to interact with the peripherals will vary.
在嵌入式环境中,可移植性是一个非常重要的主题:不同厂商,甚至同一厂商的不同家族的微控制器都提供不同的外围设备和功能,并且与这些外围设备进行交互的方式也会有所不同。
A common way to equalize such differences is via a layer called Hardware Abstraction layer or **HAL**.
填平这种差异的常用方法是通过硬件抽象层(**HAL**)。
> Hardware abstractions are sets of routines in software that emulate some platform-specific details, giving programs direct access to the hardware resources.
>硬件抽象层是一组例程,它们可以模拟某些特定平台的详细信息,从而使程序可以直接访问硬件资源。
>
> They often allow programmers to write device-independent, high performance applications by providing standard operating system (OS) calls to hardware.
>通过提供对硬件的标准操作系统(OS)调用,从而允许程序员编写与设备无关的高性能应用程序。
>
> *Wikipedia: [Hardware Abstraction Layer]*
> *维基百科:[硬件抽象层]*
[Hardware Abstraction Layer]: https://en.wikipedia.org/wiki/Hardware_abstraction
[硬件抽象层]:https://en.wikipedia.org/wiki/Hardware_abstraction
Embedded systems are a bit special in this regard since we typically do not have operating systems and user installable software but firmware images which are compiled as a whole as well as a number of other constraints. So while the traditional approach as defined by Wikipedia could potentially work it is likely not the most productive approach to ensure portability.
嵌入式系统在这方面有点特殊,因为它们通常没有操作系统,也没有用户可安装的软件,而是将固件镜像作为一个整体进行编译,此外还有许多其他限制。因此,尽管维基百科定义的传统方法可能可行,但它很可能不是确保可移植性的最有效方法。
How do we do this in Rust? Enter **embedded-hal**...
我们如何在Rust中做到这一点?答案就是**embedded-hal**……
## What is embedded-hal?
## 什么是Embedded-hal
In a nutshell it is a set of traits which define implementation contracts between **HAL implementations**, **drivers** and **applications (or firmwares)**. Those contracts include both capabilities (i.e. if a trait is implemented for a certain type, the **HAL implementation** provides a certain capability) and methods (i.e. if you can construct a type implementing a trait it is guaranteed that you have the methods specified in the trait available).
简而言之它是一组Trait它们定义了**HAL实现****驱动程序**和**应用程序**(或**固件**)之间的实现合约。这些合约包括功能(如果为某种类型实现了某种Trait**HAL实现**会提供某种能力)和方法(如果某种类型实现了某个Trait,HAL确保这个Trait指定的方法可用)。
A typical layering might look like this:
典型的分层可能如下所示:
![](../assets/rust_layers.svg)
Some of the defined traits in **embedded-hal** are:
* GPIO (input and output pins)
* Serial communication
**Embedded-hal**部分预定义的Trait有:
* GPIO(输入和输出引脚)
* 串行通讯
* I2C
* SPI
* Timers/Countdowns
* Analog Digital Conversion
* 计时器/倒数计数器
* 模拟数字转换
The main reason for having the **embedded-hal** traits, and crates implementing and using them, is to keep complexity in check. If you consider that an application might have to implement the use of the peripheral in the hardware, as well as the application logic and potentially drivers for additional hardware components, then it should be easy to see that the re-usability is very limited. Expressed mathematically, if **M** is the number of peripheral HAL implementations and **N** the number of drivers, then if we were to reinvent the wheel for every application we would end up with **M*N** implementations, while using the *API* provided by the **embedded-hal** traits makes the implementation complexity approach **M+N**. Of course there are additional benefits to be had, such as less trial-and-error thanks to well-defined and ready-to-use APIs.
使用**embedded-hal**的Trait以及实现/使用这些Trait的crate,主要是为了控制复杂性。试想一个应用程序不仅要实现应用逻辑,还要自己实现对硬件外设的使用,甚至可能还要为额外的硬件组件编写驱动程序,那么不难看出其代码可重用性会非常有限。用数学方式表达:如果**M**是外设HAL实现的数量,而**N**是驱动程序的数量,当我们为每个应用都重新发明轮子时,最终将得到**M\*N**种实现;而使用**embedded-hal**的Trait提供的*API*,实现的复杂度将接近**M+N**。当然还有其他好处,例如定义明确、随取随用的API可以减少反复试验。
## Users of the embedded-hal
## embedded-hal的使用者
As said above there are three main users of the HAL:
如上所述HAL主要有三个使用者
### HAL implementation
### HAL实现
A HAL implementation provides the interfacing between the hardware and the users of the HAL traits. Typical implementations consist of three parts:
* One or more hardware specific types
* Functions to create and initialize such a type, often providing various configuration options (speed, operation mode, use pins, etc.)
* one or more `trait` `impl` of **embedded-hal** traits for that type
HAL实现提供了硬件与HAL trait的用户之间的接口。典型的实现包括三个部分
* 一种或多种硬件相关的数据类型
* 创建和初始化这种类型的函数,通常提供各种配置选项(速度,操作模式,引脚等)
* 为该类型实现**embedded-hal**定义的一个或者多个trait
Such a **HAL implementation** can come in various flavours:
* Via low-level hardware access, e.g. via registers
* Via operating system, e.g. by using the `sysfs` under Linux
* Via adapter, e.g. a mock of types for unit testing
* Via driver for hardware adapters, e.g. I2C multiplexer or GPIO expander
这样的**HAL实现**可以有多种形式:
### Driver
* 通过低级别的硬件访问,例如通过寄存器
* 通过操作系统,例如在Linux下使用`sysfs`
* 通过适配器,例如模拟单元测试的类型
* 通过硬件适配器的驱动程序例如I2C多路复用器或GPIO扩展器
A driver implements a set of custom functionality for an internal or external component, connected to a peripheral implementing the embedded-hal traits. Typical examples for such drivers include various sensors (temperature, magnetometer, accelerometer, light), display devices (LED arrays, LCD displays) and actuators (motors, transmitters).
### 驱动
驱动程序为内部或外部组件实现了一组自定义功能,这些组件连接到实现了embedded-hal trait的外设上。这类驱动程序的典型示例包括各种传感器(温度、磁力计、加速度计、光照)、显示设备(LED阵列、LCD显示屏)和执行器(电机、发射器)。
驱动程序必须用一个实现了embedded-hal某个trait的类型实例来初始化(这一点通过trait约束(trait bound)来保证),并为其自身的类型实例提供一组自定义方法,以便与被驱动的设备进行交互。
A driver has to be initialized with an instance of type that implements a certain `trait` of the embedded-hal which is ensured via trait bound and provides its own type instance with a custom set of methods allowing to interact with the driven device.
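A rough sketch of this shape, using a simplified stand-in trait instead of the real **embedded-hal** definition and a made-up `Led` driver:

```rust , ignore
// Stand-in for an embedded-hal style trait; the real one lives in the
// `embedded-hal` crate and also carries an associated `Error` type.
pub trait OutputPin {
    fn set_high(&mut self);
    fn set_low(&mut self);
}

// The driver is generic over *any* type implementing the trait (trait bound)...
pub struct Led<P: OutputPin> {
    pin: P,
}

impl<P: OutputPin> Led<P> {
    // ...it is initialized with a concrete pin from some HAL implementation...
    pub fn new(pin: P) -> Self {
        Led { pin }
    }

    // ...and exposes its own higher-level methods to the application.
    pub fn on(&mut self) {
        self.pin.set_high();
    }

    pub fn off(&mut self) {
        self.pin.set_low();
    }
}
```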
### Application
### 应用
The application binds the various parts together and ensures that the desired functionality is achieved. When porting between different systems, this is the part which requires the most adaptation efforts, since the application needs to correctly initialize the real hardware via the HAL implementation and the initialisation of different hardware differs, sometimes drastically so. Also the user choice often plays a big role, since components can be physically connected to different terminals, hardware buses sometimes need external hardware to match the configuration or there are different trade-offs to be made in the use of internal peripherals (e.g. multiple timers with different capabilities are available or peripherals conflict with others).
该应用程序将各个部分绑定在一起并确保实现所需的功能。在不同系统之间进行移植时这是需要花费大量精力的部分因为应用程序需要通过HAL实现正确地初始化实际硬件并且不同硬件的初始化有时甚至完全不同。另外用户的选择通常也起着很大的作用因为组件可以连接到不同的终端有时硬件总线需要外部硬件来匹配配置或者在使用内部外设时需要进行不同的权衡(例如,多个具有不同功能的定时器或外设之间互相冲突)。

View File

@ -0,0 +1,62 @@
# Portability
In embedded environments portability is a very important topic: Every vendor and even each family from a single manufacturer offers different peripherals and capabilities and similarly the ways to interact with the peripherals will vary.
A common way to equalize such differences is via a layer called Hardware Abstraction layer or **HAL**.
> Hardware abstractions are sets of routines in software that emulate some platform-specific details, giving programs direct access to the hardware resources.
>
> They often allow programmers to write device-independent, high performance applications by providing standard operating system (OS) calls to hardware.
>
> *Wikipedia: [Hardware Abstraction Layer]*
[Hardware Abstraction Layer]: https://en.wikipedia.org/wiki/Hardware_abstraction
Embedded systems are a bit special in this regard since we typically do not have operating systems and user installable software but firmware images which are compiled as a whole as well as a number of other constraints. So while the traditional approach as defined by Wikipedia could potentially work it is likely not the most productive approach to ensure portability.
How do we do this in Rust? Enter **embedded-hal**...
## What is embedded-hal?
In a nutshell it is a set of traits which define implementation contracts between **HAL implementations**, **drivers** and **applications (or firmwares)**. Those contracts include both capabilities (i.e. if a trait is implemented for a certain type, the **HAL implementation** provides a certain capability) and methods (i.e. if you can construct a type implementing a trait it is guaranteed that you have the methods specified in the trait available).
A typical layering might look like this:
![](../assets/rust_layers.svg)
Some of the defined traits in **embedded-hal** are:
* GPIO (input and output pins)
* Serial communication
* I2C
* SPI
* Timers/Countdowns
* Analog Digital Conversion
The main reason for having the **embedded-hal** traits, and crates implementing and using them, is to keep complexity in check. If you consider that an application might have to implement the use of the peripheral in the hardware, as well as the application logic and potentially drivers for additional hardware components, then it should be easy to see that the re-usability is very limited. Expressed mathematically, if **M** is the number of peripheral HAL implementations and **N** the number of drivers, then if we were to reinvent the wheel for every application we would end up with **M*N** implementations, while using the *API* provided by the **embedded-hal** traits makes the implementation complexity approach **M+N**. Of course there are additional benefits to be had, such as less trial-and-error thanks to well-defined and ready-to-use APIs.
## Users of the embedded-hal
As said above there are three main users of the HAL:
### HAL implementation
A HAL implementation provides the interfacing between the hardware and the users of the HAL traits. Typical implementations consist of three parts:
* One or more hardware specific types
* Functions to create and initialize such a type, often providing various configuration options (speed, operation mode, use pins, etc.)
* one or more `trait` `impl` of **embedded-hal** traits for that type
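A hypothetical sketch covering those three parts; the pin name, register address and the trait itself are invented for illustration and not taken from a real device crate:

```rust , ignore
// Stand-in for an embedded-hal style trait.
pub trait OutputPin {
    fn set_high(&mut self);
}

// 1. a hardware specific type
pub struct PA0 {
    _private: (),
}

// 2. a function to create and configure that type
impl PA0 {
    pub fn into_push_pull_output() -> PA0 {
        // configure mode/speed registers here
        PA0 { _private: () }
    }
}

// 3. an `impl` of the trait for that type
impl OutputPin for PA0 {
    fn set_high(&mut self) {
        const GPIOA_BSRR: *mut u32 = 0x4800_0018 as *mut u32; // invented address
        unsafe { core::ptr::write_volatile(GPIOA_BSRR, 1 << 0) }
    }
}
```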
Such a **HAL implementation** can come in various flavours:
* Via low-level hardware access, e.g. via registers
* Via operating system, e.g. by using the `sysfs` under Linux
* Via adapter, e.g. a mock of types for unit testing
* Via driver for hardware adapters, e.g. I2C multiplexer or GPIO expander
### Driver
A driver implements a set of custom functionality for an internal or external component, connected to a peripheral implementing the embedded-hal traits. Typical examples for such drivers include various sensors (temperature, magnetometer, accelerometer, light), display devices (LED arrays, LCD displays) and actuators (motors, transmitters).
A driver has to be initialized with an instance of type that implements a certain `trait` of the embedded-hal which is ensured via trait bound and provides its own type instance with a custom set of methods allowing to interact with the driven device.
### Application
The application binds the various parts together and ensures that the desired functionality is achieved. When porting between different systems, this is the part which requires the most adaptation efforts, since the application needs to correctly initialize the real hardware via the HAL implementation and the initialisation of different hardware differs, sometimes drastically so. Also the user choice often plays a big role, since components can be physically connected to different terminals, hardware buses sometimes need external hardware to match the configuration or there are different trade-offs to be made in the use of internal peripherals (e.g. multiple timers with different capabilities are available or peripherals conflict with others).

View File

@ -1,16 +1,12 @@
# Exceptions
# 异常
Exceptions, and interrupts, are a hardware mechanism by which the processor
handles asynchronous events and fatal errors (e.g. executing an invalid
instruction). Exceptions imply preemption and involve exception handlers,
subroutines executed in response to the signal that triggered the event.
异常和中断是一种硬件机制,处理器通过它来处理异步事件和致命错误(例如执行了无效指令)。异常意味着抢占,并涉及异常处理程序,即为响应触发事件的信号而执行的子程序。
The `cortex-m-rt` crate provides an [`exception`] attribute to declare exception
handlers.
`cortex-m-rt` crate提供了一个[`exception`]属性来声明异常处理程序。
[`exception`]: https://docs.rs/cortex-m-rt-macros/latest/cortex_m_rt_macros/attr.exception.html
[`exception`]:https://docs.rs/cortex-m-rt-macros/latest/cortex_m_rt_macros/attr.exception.html
``` rust,ignore
``` rust , ignore
// Exception handler for the SysTick (System Timer) exception
#[exception]
fn SysTick() {
@ -18,15 +14,11 @@ fn SysTick() {
}
```
Other than the `exception` attribute exception handlers look like plain
functions but there's one more difference: `exception` handlers can *not* be
called by software. Following the previous example, the statement `SysTick();`
would result in a compilation error.
除了`exception` 属性之外,异常处理程序看起来像普通函数,但还有另外一个区别:`exception` 处理程序不能被软件调用。上面的示例中,语句`SysTick();`将导致编译错误。
This behavior is pretty much intended and it's required to provide a feature:
`static mut` variables declared *inside* `exception` handlers are *safe* to use.
这种行为是有意为之,而且是实现以下特性所必需的:在`exception`处理程序*内部*声明的`static mut`变量可以*安全地*使用。
``` rust,ignore
``` rust , ignore
#[exception]
fn SysTick() {
static mut COUNT: u32 = 0;
@ -36,28 +28,17 @@ fn SysTick() {
}
```
As you may know, using `static mut` variables in a function makes it
[*non-reentrant*](https://en.wikipedia.org/wiki/Reentrancy_(computing)). It's undefined behavior to call a non-reentrant function,
directly or indirectly, from more than one exception / interrupt handler or from
`main` and one or more exception / interrupt handlers.
如您所知,在函数中使用`static mut`变量会使其成为[不可重入函数](https://en.wikipedia.org/wiki/Reentrancy_(computing))。从多个异常/中断处理程序中,或者同时从`main`和一个或多个异常/中断处理程序中,直接或间接地调用不可重入函数,都属于未定义行为。
Safe Rust must never result in undefined behavior so non-reentrant functions
must be marked as `unsafe`. Yet I just told that `exception` handlers can safely
use `static mut` variables. How is this possible? This is possible because
`exception` handlers can *not* be called by software thus reentrancy is not
possible.
Safe Rust绝不能导致未定义行为,因此不可重入函数必须标记为`unsafe`。但我刚才却说`exception`处理程序可以安全地使用`static mut`变量。这怎么可能?因为`exception`处理程序不能被软件调用,所以不存在重入的可能。
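For contrast, here is a sketch (a hypothetical plain function, not from the book's example) of why the same pattern in an ordinary function requires `unsafe`:

```rust , ignore
fn not_reentrant() {
    static mut COUNT: u32 = 0;
    // An ordinary function could be called from `main` *and* from an interrupt
    // handler at the same time, so this access is only allowed inside `unsafe`.
    unsafe { COUNT += 1 };
}
```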
## A complete example
## 一个完整的例子
Here's an example that uses the system timer to raise a `SysTick` exception
roughly every second. The `SysTick` exception handler keeps track of how many
times it has been called in the `COUNT` variable and then prints the value of
`COUNT` to the host console using semihosting.
这是一个使用系统定时器大约每秒触发一次`SysTick`异常的示例。`SysTick`异常处理程序在`COUNT`变量中记录自己被调用的次数,然后使用半主机将`COUNT`的值打印到主机控制台。
> **NOTE**: You can run this example on any Cortex-M device; you can also run it
> on QEMU
> **注意**您可以在任何Cortex-M设备上运行此示例您也可以在QEMU上运行它
```rust,ignore
```rust , ignore
#![deny(unsafe_code)]
#![no_main]
#![no_std]
@ -114,6 +95,7 @@ fn SysTick() {
}
```
``` console
$ tail -n5 Cargo.toml
```
@ -132,61 +114,43 @@ $ cargo run --release
123456789
```
If you run this on the Discovery board you'll see the output on the OpenOCD
console. Also, the program will *not* stop when the count reaches 9.
如果在Discovery开发板上运行此程序,您会在OpenOCD控制台上看到输出。此外,当计数达到9时,程序将**不会**停止。
## The default exception handler
## 默认异常处理程序
What the `exception` attribute actually does is *override* the default exception
handler for a specific exception. If you don't override the handler for a
particular exception it will be handled by the `DefaultHandler` function, which
defaults to:
`exception`属性的实际作用是**覆盖**特定异常的默认异常处理程序。如果您不重写特定异常的处理程序,它将由`DefaultHandler`函数处理,该函数默认为:
``` rust,ignore
``` rust , ignore
fn DefaultHandler() {
loop {}
}
```
This function is provided by the `cortex-m-rt` crate and marked as
`#[no_mangle]` so you can put a breakpoint on "DefaultHandler" and catch
*unhandled* exceptions.
此函数由`cortex-m-rt` crate提供,并标记为`#[no_mangle]`,因此您可以在`DefaultHandler`上设置断点,以捕获**未处理的**异常。
It's possible to override this `DefaultHandler` using the `exception` attribute:
可以使用`exception`属性覆盖这个`DefaultHandler`
``` rust,ignore
``` rust , ignore
#[exception]
fn DefaultHandler(irqn: i16) {
// custom default handler
}
```
The `irqn` argument indicates which exception is being serviced. A negative
value indicates that a Cortex-M exception is being serviced; and zero or a
positive value indicate that a device specific exception, AKA interrupt, is
being serviced.
`irqn`参数指示正在处理哪个异常。负值表示正在处理Cortex-M异常;零或正值表示正在处理设备特定的异常,也就是中断。
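For example, one possible handler could branch on the sign of `irqn` (a sketch, not something required by `cortex-m-rt`):

```rust , ignore
#[exception]
fn DefaultHandler(irqn: i16) {
    if irqn < 0 {
        // a Cortex-M core exception ended up here
    } else {
        // a device specific interrupt (numbered as in the vendor's SVD file)
    }
}
```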
## The hard fault handler
## 硬故障处理程序
The `HardFault` exception is a bit special. This exception is fired when the
program enters an invalid state so its handler can *not* return as that could
result in undefined behavior. Also, the runtime crate does a bit of work before
the user defined `HardFault` handler is invoked to improve debuggability.
`HardFault`异常有点特殊。当程序进入无效状态时,将引发此异常,因此它的处理程序不能返回,因为这可能导致未定义的行为。另外,在调用用户定义的`HardFault`前运行时crate会做一些工作以提高程序的可调试性。
The result is that the `HardFault` handler must have the following signature:
`fn(&ExceptionFrame) -> !`. The argument of the handler is a pointer to
registers that were pushed into the stack by the exception. These registers are
a snapshot of the processor state at the moment the exception was triggered and
are useful to diagnose a hard fault.
因此`HardFault`处理程序必须具有以下签名:`fn(&ExceptionFrame) -> !`。处理程序的参数是一个指针,指向被异常压入堆栈的寄存器。这些寄存器是异常触发时处理器状态的快照,可用于诊断硬故障。
Here's an example that performs an illegal operation: a read to a nonexistent
memory location.
这是一个执行非法操作的示例:读取不存在的内存位置。
> **NOTE**: This program won't work, i.e. it won't crash, on QEMU because
> `qemu-system-arm -machine lm3s6965evb` doesn't check memory loads and will
> happily return `0 `on reads to invalid memory.
> **注意**该程序在QEMU上不起作用即不会崩溃因为`qemu-system-arm -machine lm3s6965evb`不会检查内存读取,并且在读取到无效内存时会很高兴地返回`0`。
```rust,ignore
```rust , ignore
#![no_main]
#![no_std]
@ -218,8 +182,7 @@ fn HardFault(ef: &ExceptionFrame) -> ! {
}
```
The `HardFault` handler prints the `ExceptionFrame` value. If you run this
you'll see something like this on the OpenOCD console.
`HardFault`处理程序将打印`ExceptionFrame`值。如果运行此程序您将在OpenOCD控制台上看到类似的内容。
``` console
$ openocd
@ -236,10 +199,9 @@ ExceptionFrame {
}
```
The `pc` value is the value of the Program Counter at the time of the exception
and it points to the instruction that triggered the exception.
`pc` 值是发生异常时程序计数器的值,它指向触发异常的指令。
If you look at the disassembly of the program:
如果您查看程序的反汇编:
``` console
@ -252,7 +214,4 @@ ResetTrampoline:
800094c: b #-0x4 <ResetTrampoline+0xa>
```
You can lookup the value of the program counter `0x0800094a` in the dissassembly.
You'll see that a load operation (`ldr r0, [r0]` ) caused the exception.
The `r0` field of `ExceptionFrame` will tell you the value of register `r0`
was `0x3fff_fffe` at that time.
您可以在反汇编中查找程序计数器`0x0800094a`对应的位置。您会看到是一条加载指令(`ldr r0, [r0]`)引起了异常。`ExceptionFrame`的`r0`字段告诉您,当时寄存器`r0`的值为`0x3fff_fffe`。

215
src/start/exceptions_en.md Normal file
View File

@ -0,0 +1,215 @@
# Exceptions
Exceptions, and interrupts, are a hardware mechanism by which the processor handles asynchronous events and fatal errors (e.g. executing an invalid instruction). Exceptions imply preemption and involve exception handlers, subroutines executed in response to the signal that triggered the event.
The `cortex-m-rt` crate provides an [`exception`] attribute to declare exception handlers.
[`exception`]: https://docs.rs/cortex-m-rt-macros/latest/cortex_m_rt_macros/attr.exception.html
``` rust , ignore
// Exception handler for the SysTick (System Timer) exception
#[exception]
fn SysTick() {
// ..
}
```
Other than the `exception` attribute exception handlers look like plain functions but there's one more difference: `exception` handlers can *not* be called by software. Following the previous example, the statement `SysTick();` would result in a compilation error.
This behavior is pretty much intended and it's required to provide a feature: `static mut` variables declared *inside* `exception` handlers are *safe* to use.
``` rust , ignore
#[exception]
fn SysTick() {
static mut COUNT: u32 = 0;
// `COUNT` has type `&mut u32` and it's safe to use
*COUNT += 1;
}
```
As you may know, using `static mut` variables in a function makes it [*non-reentrant*](https://en.wikipedia.org/wiki/Reentrancy_(computing)). It's undefined behavior to call a non-reentrant function, directly or indirectly, from more than one exception / interrupt handler or from `main` and one or more exception / interrupt handlers.
Safe Rust must never result in undefined behavior so non-reentrant functions must be marked as `unsafe`. Yet I just told that `exception` handlers can safely use `static mut` variables. How is this possible? This is possible because `exception` handlers can *not* be called by software thus reentrancy is not possible.
## A complete example
Here's an example that uses the system timer to raise a `SysTick` exception roughly every second. The `SysTick` exception handler keeps track of how many times it has been called in the `COUNT` variable and then prints the value of `COUNT` to the host console using semihosting.
> **NOTE**: You can run this example on any Cortex-M device; you can also run it on QEMU
```rust , ignore
#![deny(unsafe_code)]
#![no_main]
#![no_std]
extern crate panic_halt;
use core::fmt::Write;
use cortex_m::peripheral::syst::SystClkSource;
use cortex_m_rt::{entry, exception};
use cortex_m_semihosting::{
debug,
hio::{self, HStdout},
};
#[entry]
fn main() -> ! {
let p = cortex_m::Peripherals::take().unwrap();
let mut syst = p.SYST;
// configures the system timer to trigger a SysTick exception every second
syst.set_clock_source(SystClkSource::Core);
// this is configured for the LM3S6965 which has a default CPU clock of 12 MHz
syst.set_reload(12_000_000);
syst.clear_current();
syst.enable_counter();
syst.enable_interrupt();
loop {}
}
#[exception]
fn SysTick() {
static mut COUNT: u32 = 0;
static mut STDOUT: Option<HStdout> = None;
*COUNT += 1;
// Lazy initialization
if STDOUT.is_none() {
*STDOUT = hio::hstdout().ok();
}
if let Some(hstdout) = STDOUT.as_mut() {
write!(hstdout, "{}", *COUNT).ok();
}
// IMPORTANT omit this `if` block if running on real hardware or your
// debugger will end in an inconsistent state
if *COUNT == 9 {
// This will terminate the QEMU process
debug::exit(debug::EXIT_SUCCESS);
}
}
```
``` console
$ tail -n5 Cargo.toml
```
``` toml
[dependencies]
cortex-m = "0.5.7"
cortex-m-rt = "0.6.3"
panic-halt = "0.2.0"
cortex-m-semihosting = "0.3.1"
```
``` console
$ cargo run --release
Running `qemu-system-arm -cpu cortex-m3 -machine lm3s6965evb (..)
123456789
```
If you run this on the Discovery board you'll see the output on the OpenOCD console. Also, the program will *not* stop when the count reaches 9.
## The default exception handler
What the `exception` attribute actually does is *override* the default exception handler for a specific exception. If you don't override the handler for a particular exception it will be handled by the `DefaultHandler` function, which defaults to:
``` rust , ignore
fn DefaultHandler() {
loop {}
}
```
This function is provided by the `cortex-m-rt` crate and marked as `#[no_mangle]` so you can put a breakpoint on "DefaultHandler" and catch *unhandled* exceptions.
It's possible to override this `DefaultHandler` using the `exception` attribute:
``` rust , ignore
#[exception]
fn DefaultHandler(irqn: i16) {
// custom default handler
}
```
The `irqn` argument indicates which exception is being serviced. A negative value indicates that a Cortex-M exception is being serviced; and zero or a positive value indicate that a device specific exception, AKA interrupt, is being serviced.
## The hard fault handler
The `HardFault` exception is a bit special. This exception is fired when the program enters an invalid state so its handler can *not* return as that could result in undefined behavior. Also, the runtime crate does a bit of work before the user defined `HardFault` handler is invoked to improve debuggability.
The result is that the `HardFault` handler must have the following signature: `fn(&ExceptionFrame) -> !`. The argument of the handler is a pointer to registers that were pushed into the stack by the exception. These registers are a snapshot of the processor state at the moment the exception was triggered and are useful to diagnose a hard fault.
Here's an example that performs an illegal operation: a read to a nonexistent memory location.
> **NOTE**: This program won't work, i.e. it won't crash, on QEMU because `qemu-system-arm -machine lm3s6965evb` doesn't check memory loads and will happily return `0 `on reads to invalid memory.
```rust , ignore
#![no_main]
#![no_std]
extern crate panic_halt;
use core::fmt::Write;
use core::ptr;
use cortex_m_rt::{entry, exception, ExceptionFrame};
use cortex_m_semihosting::hio;
#[entry]
fn main() -> ! {
// read a nonexistent memory location
unsafe {
ptr::read_volatile(0x3FFF_FFFE as *const u32);
}
loop {}
}
#[exception]
fn HardFault(ef: &ExceptionFrame) -> ! {
if let Ok(mut hstdout) = hio::hstdout() {
writeln!(hstdout, "{:#?}", ef).ok();
}
loop {}
}
```
The `HardFault` handler prints the `ExceptionFrame` value. If you run this you'll see something like this on the OpenOCD console.
``` console
$ openocd
(..)
ExceptionFrame {
r0: 0x3ffffffe,
r1: 0x00f00000,
r2: 0x20000000,
r3: 0x00000000,
r12: 0x00000000,
lr: 0x080008f7,
pc: 0x0800094a,
xpsr: 0x61000000
}
```
The `pc` value is the value of the Program Counter at the time of the exception and it points to the instruction that triggered the exception.
If you look at the disassembly of the program:
``` console
$ cargo objdump --bin app --release -- -d -no-show-raw-insn -print-imm-hex
(..)
ResetTrampoline:
8000942: movw r0, #0xfffe
8000946: movt r0, #0x3fff
800094a: ldr r0, [r0]
800094c: b #-0x4 <ResetTrampoline+0xa>
```
You can look up the value of the program counter `0x0800094a` in the disassembly. You'll see that a load operation (`ldr r0, [r0]` ) caused the exception. The `r0` field of `ExceptionFrame` will tell you the value of register `r0` was `0x3fff_fffe` at that time.

View File

@ -1,44 +1,34 @@
# Hardware
# 硬件
By now you should be somewhat familiar with the tooling and the development
process. In this section we'll switch to real hardware; the process will remain
largely the same. Let's dive in.
现在,您应该对工具和开发过程有所了解。在本节中,我们将切换到实际硬件,该过程将基本保持不变,让我们开始吧。
## Know your hardware
## 了解您的硬件
Before we begin you need to identify some characteristics of the target device
as these will be used to configure the project:
在我们开始之前,您需要确定目标设备的一些特征,因为这些特征将用于配置项目:
- The ARM core. e.g. Cortex-M3.
- ARM内核。例如Cortex-M3。
- Does the ARM core include an FPU? Cortex-M4**F** and Cortex-M7**F** cores do.
- ARM内核是否包括FPU Cortex-M4**F**和Cortex-M7**F**内核都有FPU。
- How much Flash memory and RAM does the target device have? e.g. 256 KiB of
Flash and 32 KiB of RAM.
- 目标设备有多少闪存和RAM?例如256 KiB的闪存和32 KiB的RAM。
- Where are Flash memory and RAM mapped in the address space? e.g. RAM is
commonly located at address `0x2000_0000`.
- 闪存和RAM被映射到地址空间的什么位置?例如,RAM通常位于地址`0x2000_0000`。
You can find this information in the data sheet or the reference manual of your
device.
通常您可以在数据手册或设备的参考手册中找到这些信息。
In this section we'll be using our reference hardware, the STM32F3DISCOVERY.
This board contains an STM32F303VCT6 microcontroller. This microcontroller has:
在本节中我们将使用我们的参考硬件STM32F3DISCOVERY。该开发板包含STM32F303VCT6微控制器。该微控制器具有
- A Cortex-M4F core that includes a single precision FPU
- 一个Cortex-M4F内核其中包括一个单精度FPU
- 256 KiB of Flash located at address 0x0800_0000.
- 256 KiB闪存,位于地址0x0800_0000。
- 40 KiB of RAM located at address 0x2000_0000. (There's another RAM region but
for simplicity we'll ignore it).
- 40 KiB RAM,位于地址0x2000_0000。(还有另一个RAM区域,为简单起见,我们将其忽略。)
## Configuring
## 配置
We'll start from scratch with a fresh template instance. Refer to the
[previous section on QEMU] for a refresher on how to do this without
`cargo-generate`.
我们将从一个全新的模板实例开始。关于如何在没有`cargo-generate`的情况下完成这一步,请回顾[上一小节的QEMU]。
[previous section on QEMU]: qemu.md
[上一小节的QEMU]:qemu.md
``` console
$ cargo generate --git https://github.com/rust-embedded/cortex-m-quickstart
@ -49,7 +39,7 @@ $ cargo generate --git https://github.com/rust-embedded/cortex-m-quickstart
$ cd app
```
Step number one is to set a default compilation target in `.cargo/config`.
第一步是在`.cargo/config`中设置默认的编译目标。
``` console
$ tail -n5 .cargo/config
@ -63,10 +53,10 @@ $ tail -n5 .cargo/config
target = "thumbv7em-none-eabihf" # Cortex-M4F and Cortex-M7F (with FPU)
```
We'll use `thumbv7em-none-eabihf` as that covers the Cortex-M4F core.
The second step is to enter the memory region information into the `memory.x`
file.
我们将使用`thumbv7em-none-eabihf`因为它适合Cortex-M4F内核。
第二步是将内存区域信息填入`memory.x`文件中。
``` console
$ cat memory.x
@ -79,10 +69,9 @@ MEMORY
}
```
Make sure the `debug::exit()` call is commented out or removed, it is used
only for running in QEMU.
确保`debug::exit()`调用已被注释掉或删除,因为它仅用于在QEMU中运行。
```rust,ignore
```rust , ignore
#[entry]
fn main() -> ! {
hprintln!("Hello, world!").unwrap();
@ -95,34 +84,24 @@ fn main() -> ! {
}
```
You can now cross compile programs using `cargo build`
and inspect the binaries using `cargo-binutils` as you did before. The
`cortex-m-rt` crate handles all the magic required to get your chip running,
as helpfully, pretty much all Cortex-M CPUs boot in the same fashion.
现在,您可以像以前一样使用`cargo build`交叉编译程序,并使用`cargo-binutils`检查二进制文件。`cortex-m-rt` crate处理了让芯片运行起来所需的全部"魔法";好在几乎所有Cortex-M CPU都以相同的方式启动。
``` console
$ cargo build --example hello
```
## Debugging
## 调试
Debugging will look a bit different. In fact, the first steps can look different
depending on the target device. In this section we'll show the steps required to
debug a program running on the STM32F3DISCOVERY. This is meant to serve as a
reference; for device specific information about debugging check out [the
Debugonomicon](https://github.com/rust-embedded/debugonomicon).
调试看起来会有所不同。实际上,根据目标设备的不同,最初的步骤可能也不一样。在本节中,我们将介绍调试运行在STM32F3DISCOVERY上的程序所需的步骤,仅供参考;有关特定设备的调试信息,请查看[Debugonomicon](https://github.com/rust-embedded/debugonomicon)。
As before we'll do remote debugging and the client will be a GDB process. This
time, however, the server will be OpenOCD.
和以前一样我们将进行远程调试客户端是GDB进程,服务器将是OpenOCD。
As done during the [verify] section connect the discovery board to your laptop /
PC and check that the ST-LINK header is populated.
按照[验证]一节中的做法,将Discovery开发板连接到您的笔记本电脑/PC,并检查ST-LINK接头(header)是否已就位。
[verify]: ../intro/install/verify.md
[验证]: ../intro/install/verify.md
On a terminal run `openocd` to connect to the ST-LINK on the discovery board.
Run this command from the root of the template; `openocd` will pick up the
`openocd.cfg` file which indicates which interface file and target file to use.
在终端上运行`openocd`,以连接到Discovery开发板上的ST-LINK。从模板的根目录运行此命令;`openocd`会读取`openocd.cfg`文件,以确定要使用的接口文件和目标文件。
``` console
$ cat openocd.cfg
@ -143,9 +122,7 @@ source [find interface/stlink-v2-1.cfg]
source [find target/stm32f3x.cfg]
```
> **NOTE** If you found out that you have an older revision of the discovery
> board during the [verify] section then you should modify the `openocd.cfg`
> file at this point to use `interface/stlink-v2.cfg`.
> **注意**如果您在[验证]部分发现开发板的版本较旧,则此时应修改`openocd.cfg`文件以使用`interface/stlink-v2.cfg`。
``` console
$ openocd
@ -167,13 +144,13 @@ Info : Target voltage: 2.913879
Info : stm32f3x.cpu: hardware has 6 breakpoints, 4 watchpoints
```
On another terminal run GDB, also from the root of the template.
在另一个终端上也从模板的根目录运行GDB。
``` console
$ <gdb> -q target/thumbv7em-none-eabihf/debug/examples/hello
```
Next connect GDB to OpenOCD, which is waiting for a TCP connection on port 3333.
接下来将GDB连接到OpenOCDOpenOCD正在监听端口3333,等待新的TCP连接。
``` console
(gdb) target remote :3333
@ -181,8 +158,7 @@ Remote debugging using :3333
0x00000000 in ?? ()
```
Now proceed to *flash* (load) the program onto the microcontroller using the
`load` command.
现在,使用`load`命令将程序加载到微控制器上。
``` console
(gdb) load
@ -193,19 +169,16 @@ Start address 0x800144e, load size 10380
Transfer rate: 17 KB/sec, 3460 bytes/write.
```
The program is now loaded. This program uses semihosting so before we do any
semihosting call we have to tell OpenOCD to enable semihosting. You can send
commands to OpenOCD using the `monitor` command.
现在程序已加载。该程序使用半主机,因此在进行任何半主机调用之前,我们必须告诉OpenOCD启用半主机。您可以使用`monitor`命令将命令发送给OpenOCD。
``` console
(gdb) monitor arm semihosting enable
semihosting is enabled
```
> You can see all the OpenOCD commands by invoking the `monitor help` command.
>您可以通过调用`monitor help`命令来查看所有OpenOCD命令。
Like before we can skip all the way to `main` using a breakpoint and the
`continue` command.
像之前一样,我们可以利用断点和`continue`命令一路执行到`main`函数。
``` console
(gdb) break main
@ -219,12 +192,10 @@ Breakpoint 1, main () at examples/hello.rs:15
15 let mut stdout = hio::hstdout().unwrap();
```
> **NOTE** If GDB blocks the terminal instead of hitting the breakpoint after
> you issue the `continue` command above, you might want to double check that
> the memory region information in the `memory.x` file is correctly set up
> for your device (both the starts *and* lengths).
> **注意**如果执行`continue`命令后GDB阻塞了终端而不是停在了断点上则可能需要仔细检查`memory.x`文件中的内存区域信息是否配置正确(起始地址和长度)。
像之前一样,使用`next`命令单步推进程序,应该会得到相同的结果。
Advancing the program with `next` should produce the same results as before.
``` console
(gdb) next
@ -234,8 +205,8 @@ Advancing the program with `next` should produce the same results as before.
19 debug::exit(debug::EXIT_SUCCESS);
```
At this point you should see "Hello, world!" printed on the OpenOCD console,
among other stuff.
此时,您应该会在OpenOCD控制台上看到"Hello, world!"以及其他一些输出。
``` console
$ openocd
@ -251,8 +222,7 @@ Info : halted: PC: 0x08000d70
Info : halted: PC: 0x08000d72
```
Issuing another `next` will make the processor execute `debug::exit`. This acts
as a breakpoint and halts the process:
再执行一次`next`,处理器将执行`debug::exit`。它的作用相当于一个断点,会使进程停住:
``` console
(gdb) next
@ -261,7 +231,7 @@ Program received signal SIGTRAP, Trace/breakpoint trap.
0x0800141a in __syscall ()
```
It also causes this to be printed to the OpenOCD console:
OpenOCD控制台将会打印如下内容
``` console
$ openocd
@ -274,17 +244,16 @@ target halted due to breakpoint, current mode: Thread
xPSR: 0x21000000 pc: 0x08000d76 msp: 0x20009fc0, semihosting
```
However, the process running on the microcontroller has not terminated and you
can resume it using `continue` or a similar command.
但是,在微控制器上运行的进程尚未终止,您可以使用`continue`或类似命令将其恢复。
You can now exit GDB using the `quit` command.
现在,您可以使用`quit`命令退出GDB。
``` console
(gdb) quit
```
Debugging now requires a few more steps so we have packed all those steps into a
single GDB script named `openocd.gdb`.
现在调试需要更多步骤,因此我们将所有这些步骤打包到一个名为`openocd.gdb`的GDB脚本中。
``` console
$ cat openocd.gdb
@ -309,17 +278,15 @@ load
stepi
```
Now running `<gdb> -x openocd.gdb $program` will immediately connect GDB to
OpenOCD, enable semihosting, load the program and start the process.
现在运行 `<gdb> -x openocd.gdb $program`将立即将GDB连接到OpenOCD启用半主机加载程序并启动该过程。
Alternatively, you can turn `<gdb> -x openocd.gdb` into a custom runner to make
`cargo run` build a program *and* start a GDB session. This runner is included
in `.cargo/config` but it's commented out.
您也可以将`<gdb> -x openocd.gdb`转换为自定义运行器,这样`cargo run`会自动构建程序并开始GDB会话。该运行器已包含在`.cargo/config`中,只不过现在是被注释掉的状态。
``` console
$ head -n10 .cargo/config
```
``` toml
[target.thumbv7m-none-eabi]
# uncomment this to make `cargo run` execute programs on QEMU
@ -342,4 +309,4 @@ Loading section .rodata, size 0x61c lma 0x8002270
Start address 0x800144e, load size 10380
Transfer rate: 17 KB/sec, 3460 bytes/write.
(gdb)
```
```

309
src/start/hardware_en.md Normal file
View File

@ -0,0 +1,309 @@
# Hardware
By now you should be somewhat familiar with the tooling and the development process. In this section we'll switch to real hardware; the process will remain largely the same. Let's dive in.
## Know your hardware
Before we begin you need to identify some characteristics of the target device as these will be used to configure the project:
- The ARM core. e.g. Cortex-M3.
- Does the ARM core include an FPU? Cortex-M4**F** and Cortex-M7**F** cores do.
- How much Flash memory and RAM does the target device have? e.g. 256 KiB of
Flash and 32 KiB of RAM.
- Where are Flash memory and RAM mapped in the address space? e.g. RAM is
commonly located at address `0x2000_0000`.
You can find this information in the data sheet or the reference manual of your device.
In this section we'll be using our reference hardware, the STM32F3DISCOVERY. This board contains an STM32F303VCT6 microcontroller. This microcontroller has:
- A Cortex-M4F core that includes a single precision FPU
- 256 KiB of Flash located at address 0x0800_0000.
- 40 KiB of RAM located at address 0x2000_0000. (There's another RAM region but for simplicity we'll ignore it).
## Configuring
We'll start from scratch with a fresh template instance. Refer to the [previous section on QEMU] for a refresher on how to do this without `cargo-generate`.
[previous section on QEMU]: qemu.md
``` console
$ cargo generate --git https://github.com/rust-embedded/cortex-m-quickstart
Project Name: app
Creating project called `app`...
Done! New project created /tmp/app
$ cd app
```
Step number one is to set a default compilation target in `.cargo/config`.
``` console
$ tail -n5 .cargo/config
```
``` toml
# Pick ONE of these compilation targets
# target = "thumbv6m-none-eabi" # Cortex-M0 and Cortex-M0+
# target = "thumbv7m-none-eabi" # Cortex-M3
# target = "thumbv7em-none-eabi" # Cortex-M4 and Cortex-M7 (no FPU)
target = "thumbv7em-none-eabihf" # Cortex-M4F and Cortex-M7F (with FPU)
```
We'll use `thumbv7em-none-eabihf` as that covers the Cortex-M4F core.
The second step is to enter the memory region information into the `memory.x` file.
``` console
$ cat memory.x
/* Linker script for the STM32F303VCT6 */
MEMORY
{
/* NOTE 1 K = 1 KiBi = 1024 bytes */
FLASH : ORIGIN = 0x08000000, LENGTH = 256K
RAM : ORIGIN = 0x20000000, LENGTH = 40K
}
```
Make sure the `debug::exit()` call is commented out or removed, it is used only for running in QEMU.
```rust , ignore
#[entry]
fn main() -> ! {
hprintln!("Hello, world!").unwrap();
// exit QEMU
// NOTE do not run this on hardware; it can corrupt OpenOCD state
// debug::exit(debug::EXIT_SUCCESS);
loop {}
}
```
You can now cross compile programs using `cargo build` and inspect the binaries using `cargo-binutils` as you did before. The `cortex-m-rt` crate handles all the magic required to get your chip running, as helpfully, pretty much all Cortex-M CPUs boot in the same fashion.
``` console
$ cargo build --example hello
```
## Debugging
Debugging will look a bit different. In fact, the first steps can look different depending on the target device. In this section we'll show the steps required to debug a program running on the STM32F3DISCOVERY. This is meant to serve as a reference; for device specific information about debugging check out [the Debugonomicon](https://github.com/rust-embedded/debugonomicon).
As before we'll do remote debugging and the client will be a GDB process. This time, however, the server will be OpenOCD.
As done during the [verify] section connect the discovery board to your laptop / PC and check that the ST-LINK header is populated.
[verify]: ../intro/install/verify.md
On a terminal run `openocd` to connect to the ST-LINK on the discovery board. Run this command from the root of the template; `openocd` will pick up the `openocd.cfg` file which indicates which interface file and target file to use.
``` console
$ cat openocd.cfg
```
``` text
# Sample OpenOCD configuration for the STM32F3DISCOVERY development board
# Depending on the hardware revision you got you'll have to pick ONE of these
# interfaces. At any time only one interface should be commented out.
# Revision C (newer revision)
source [find interface/stlink-v2-1.cfg]
# Revision A and B (older revisions)
# source [find interface/stlink-v2.cfg]
source [find target/stm32f3x.cfg]
```
> **NOTE** If you found out that you have an older revision of the discovery board during the [verify] section then you should modify the `openocd.cfg` file at this point to use `interface/stlink-v2.cfg`.
``` console
$ openocd
Open On-Chip Debugger 0.10.0
Licensed under GNU GPL v2
For bug reports, read
http://openocd.org/doc/doxygen/bugs.html
Info : auto-selecting first available session transport "hla_swd". To override use 'transport select <transport>'.
adapter speed: 1000 kHz
adapter_nsrst_delay: 100
Info : The selected transport took over low-level target control. The results might differ compared to plain JTAG/SWD
none separate
Info : Unable to match requested speed 1000 kHz, using 950 kHz
Info : Unable to match requested speed 1000 kHz, using 950 kHz
Info : clock speed 950 kHz
Info : STLINK v2 JTAG v27 API v2 SWIM v15 VID 0x0483 PID 0x374B
Info : using stlink api v2
Info : Target voltage: 2.913879
Info : stm32f3x.cpu: hardware has 6 breakpoints, 4 watchpoints
```
On another terminal run GDB, also from the root of the template.
``` console
$ <gdb> -q target/thumbv7em-none-eabihf/debug/examples/hello
```
Next connect GDB to OpenOCD, which is waiting for a TCP connection on port 3333.
``` console
(gdb) target remote :3333
Remote debugging using :3333
0x00000000 in ?? ()
```
Now proceed to *flash* (load) the program onto the microcontroller using the
`load` command.
``` console
(gdb) load
Loading section .vector_table, size 0x400 lma 0x8000000
Loading section .text, size 0x1e70 lma 0x8000400
Loading section .rodata, size 0x61c lma 0x8002270
Start address 0x800144e, load size 10380
Transfer rate: 17 KB/sec, 3460 bytes/write.
```
The program is now loaded. This program uses semihosting so before we do any semihosting call we have to tell OpenOCD to enable semihosting. You can send commands to OpenOCD using the `monitor` command.
``` console
(gdb) monitor arm semihosting enable
semihosting is enabled
```
> You can see all the OpenOCD commands by invoking the `monitor help` command.
Like before we can skip all the way to `main` using a breakpoint and the `continue` command.
``` console
(gdb) break main
Breakpoint 1 at 0x8000d18: file examples/hello.rs, line 15.
(gdb) continue
Continuing.
Note: automatically using hardware breakpoints for read-only addresses.
Breakpoint 1, main () at examples/hello.rs:15
15 let mut stdout = hio::hstdout().unwrap();
```
> **NOTE** If GDB blocks the terminal instead of hitting the breakpoint after you issue the `continue` command above, you might want to double check that the memory region information in the `memory.x` file is correctly set up for your device (both the starts *and* lengths).
Advancing the program with `next` should produce the same results as before.
``` console
(gdb) next
16 writeln!(stdout, "Hello, world!").unwrap();
(gdb) next
19 debug::exit(debug::EXIT_SUCCESS);
```
At this point you should see "Hello, world!" printed on the OpenOCD console, among other stuff.
``` console
$ openocd
(..)
Info : halted: PC: 0x08000e6c
Hello, world!
Info : halted: PC: 0x08000d62
Info : halted: PC: 0x08000d64
Info : halted: PC: 0x08000d66
Info : halted: PC: 0x08000d6a
Info : halted: PC: 0x08000a0c
Info : halted: PC: 0x08000d70
Info : halted: PC: 0x08000d72
```
Issuing another `next` will make the processor execute `debug::exit`. This acts as a breakpoint and halts the process:
``` console
(gdb) next
Program received signal SIGTRAP, Trace/breakpoint trap.
0x0800141a in __syscall ()
```
It also causes this to be printed to the OpenOCD console:
``` console
$ openocd
(..)
Info : halted: PC: 0x08001188
semihosting: *** application exited ***
Warn : target not halted
Warn : target not halted
target halted due to breakpoint, current mode: Thread
xPSR: 0x21000000 pc: 0x08000d76 msp: 0x20009fc0, semihosting
```
However, the process running on the microcontroller has not terminated and you can resume it using `continue` or a similar command.
You can now exit GDB using the `quit` command.
``` console
(gdb) quit
```
Debugging now requires a few more steps so we have packed all those steps into a single GDB script named `openocd.gdb`.
``` console
$ cat openocd.gdb
```
``` text
target remote :3333
# print demangled symbols
set print asm-demangle on
# detect unhandled exceptions, hard faults and panics
break DefaultHandler
break HardFault
break rust_begin_unwind
monitor arm semihosting enable
load
# start the process but immediately halt the processor
stepi
```
Now running `<gdb> -x openocd.gdb $program` will immediately connect GDB to OpenOCD, enable semihosting, load the program and start the process.
Alternatively, you can turn `<gdb> -x openocd.gdb` into a custom runner to make `cargo run` build a program *and* start a GDB session. This runner is included in `.cargo/config` but it's commented out.
``` console
$ head -n10 .cargo/config
```
``` toml
[target.thumbv7m-none-eabi]
# uncomment this to make `cargo run` execute programs on QEMU
# runner = "qemu-system-arm -cpu cortex-m3 -machine lm3s6965evb -nographic -semihosting-config enable=on,target=native -kernel"
[target.'cfg(all(target_arch = "arm", target_os = "none"))']
# uncomment ONE of these three option to make `cargo run` start a GDB session
# which option to pick depends on your system
runner = "arm-none-eabi-gdb -x openocd.gdb"
# runner = "gdb-multiarch -x openocd.gdb"
# runner = "gdb -x openocd.gdb"
```
``` console
$ cargo run --example hello
(..)
Loading section .vector_table, size 0x400 lma 0x8000000
Loading section .text, size 0x1e70 lma 0x8000400
Loading section .rodata, size 0x61c lma 0x8002270
Start address 0x800144e, load size 10380
Transfer rate: 17 KB/sec, 3460 bytes/write.
(gdb)
```

View File

@ -1,10 +1,5 @@
# Getting Started
# 入门
In this section we'll walk you through the process of writing, building,
flashing and debugging embedded programs. You will be able to try most of the
examples without any special hardware as we will show you the basics using
QEMU, a popular open-source hardware emulator. The only section where hardware
is required is, naturally enough, the [Hardware](./hardware.md) section,
where we use OpenOCD to program an [STM32F3DISCOVERY].
在本节中我们将引导您完成编写构建闪存和调试嵌入式程序的过程。您将能够在没有任何特殊硬件的情况下尝试大多数示例因为我们将使用流行的开源硬件仿真器QEMU向您展示基础知识。当然唯一需要硬件的部分是[Hardware](./hardware.md)部分在这里我们使用OpenOCD在[STM32F3DISCOVERY]上编程。
[STM32F3DISCOVERY]: http://www.st.com/en/evaluation-tools/stm32f3discovery.html
[STM32F3DISCOVERY]:http://www.st.com/en/evaluation-tools/stm32f3discovery.html

5
src/start/index_en.md Normal file
View File

@ -0,0 +1,5 @@
# Getting Started
In this section we'll walk you through the process of writing, building, flashing and debugging embedded programs. You will be able to try most of the examples without any special hardware as we will show you the basics using QEMU, a popular open-source hardware emulator. The only section where hardware is required is, naturally enough, the [Hardware](./hardware.md) section, where we use OpenOCD to program an [STM32F3DISCOVERY].
[STM32F3DISCOVERY]: http://www.st.com/en/evaluation-tools/stm32f3discovery.html

View File

@ -1,32 +1,23 @@
# Interrupts
# 中断
Interrupts differ from exceptions in a variety of ways but their operation and
use is largely similar and they are also handled by the same interrupt
controller. Whereas exceptions are defined by the Cortex-M architecture,
interrupts are always vendor (and often even chip) specific implementations,
both in naming and functionality.
中断在很多方面与异常不同,但它们的操作和使用在很大程度上是相似的,并且由同一个中断控制器处理。异常是由Cortex-M架构定义的,而中断无论在命名还是功能上,始终是供应商(甚至往往是具体芯片)特定的实现。
Interrupts do allow for a lot of flexibility which needs to be accounted for
when attempting to use them in an advanced way. We will not cover those uses in
this book, however it is a good idea to keep the following in mind:
中断确实具有很大的灵活性,在尝试以高级方式使用它们时需要考虑这些灵活性。我们不会在本书中介绍这些用法,但是请牢记以下几点:
* Interrupts have programmable priorities which determine their handlers' execution order
* Interrupts can nest and preempt, i.e. execution of an interrupt handler might be interrupted by another higher-priority interrupt
* In general the reason causing the interrupt to trigger needs to be cleared to prevent re-entering the interrupt handler endlessly
* 中断具有可编程的优先级,该优先级确定其处理程序的执行顺序
* 中断可以嵌套和抢占,即中断处理程序的执行可能会被另一个更高优先级的中断中断
* 通常需要清除导致中断触发的事件,以防止无限次重新进入中断处理程序
The general initialization steps at runtime are always the same:
* Setup the peripheral(s) to generate interrupts requests at the desired occasions
* Set the desired priority of the interrupt handler in the interrupt controller
* Enable the interrupt handler in the interrupt controller
中断的常规初始化步骤始终相同:
* 设置外设以在需要的情况下生成中断请求
* 在中断控制器中设置所需的中断处理程序优先级
* 在中断控制器中启用中断处理程序
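A heavily hedged sketch of those three steps; every name below depends on your device crate (PAC) and on the `cortex-m` version, so treat it as pseudocode rather than a working example:

```rust , ignore
fn setup_tim2_interrupt() {
    // 1. configure the peripheral so it raises an interrupt request,
    //    e.g. enable a timer's "update interrupt" bit:
    // tim2.dier.modify(|_, w| w.uie().set_bit());

    // 2. set the handler priority in the NVIC:
    // unsafe { core_peripherals.NVIC.set_priority(pac::Interrupt::TIM2, 32) };

    // 3. unmask (enable) the interrupt in the NVIC:
    // unsafe { cortex_m::peripheral::NVIC::unmask(pac::Interrupt::TIM2) };
}
```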
Similarly to exceptions, the `cortex-m-rt` crate provides an [`interrupt`]
attribute to declare interrupt handlers. The available interrupts (and
their position in the interrupt handler table) are usually automatically
generated via `svd2rust` from a SVD description.
与异常类似,`cortex-m-rt` crate 提供了一个[`interrupt`]属性来声明中断处理程序。可用的中断(及其在中断处理程序表中的位置)通常是使用`svd2rust`基于SVD描述文件自动生成的。
[`interrupt`]: https://docs.rs/cortex-m-rt-macros/0.1.5/cortex_m_rt_macros/attr.interrupt.html
[`interrupt`]:https://docs.rs/cortex-m-rt-macros/0.1.5/cortex_m_rt_macros/attr.interrupt.html
``` rust,ignore
``` rust , ignore
// Interrupt handler for the Timer2 interrupt
#[interrupt]
fn TIM2() {
@ -35,16 +26,11 @@ fn TIM2() {
}
```
Interrupt handlers look like plain functions (except for the lack of arguments)
similar to exception handlers. However they can not be called directly by other
parts of the firmware due to the special calling conventions. It is however
possible to generate interrupt requests in software to trigger a diversion to
the interrupt handler.
中断处理程序看起来和普通函数类似(只是没有参数),这一点与异常处理程序相似。但由于特殊的调用约定,固件的其他部分不能直接调用它们。不过,可以通过软件产生中断请求,从而触发对中断处理程序的调用。
Similar to exception handlers it is also possible to declare `static mut`
variables inside the interrupt handlers for *safe* state keeping.
与异常处理程序类似,也可以在中断处理程序内部声明`static mut`变量,以*安全地*保存状态。
``` rust,ignore
``` rust , ignore
#[interrupt]
fn TIM2() {
static mut COUNT: u32 = 0;
@ -54,7 +40,6 @@ fn TIM2() {
}
```
For a more detailed description about the mechanisms demonstrated here please
refer to the [exceptions section].
有关此处演示的机制的更详细说明,请参考[异常]。
[exceptions section]: ./exceptions.md
[异常]:./exceptions.md

View File

@ -0,0 +1,45 @@
# Interrupts
Interrupts differ from exceptions in a variety of ways but their operation and use is largely similar and they are also handled by the same interrupt controller. Whereas exceptions are defined by the Cortex-M architecture, interrupts are always vendor (and often even chip) specific implementations, both in naming and functionality.
Interrupts do allow for a lot of flexibility which needs to be accounted for when attempting to use them in an advanced way. We will not cover those uses in this book, however it is a good idea to keep the following in mind:
* Interrupts have programmable priorities which determine their handlers' execution order
* Interrupts can nest and preempt, i.e. execution of an interrupt handler might be interrupted by another higher-priority interrupt
* In general the reason causing the interrupt to trigger needs to be cleared to prevent re-entering the interrupt handler endlessly
The general initialization steps at runtime are always the same:
* Setup the peripheral(s) to generate interrupts requests at the desired occasions
* Set the desired priority of the interrupt handler in the interrupt controller
* Enable the interrupt handler in the interrupt controller
Similarly to exceptions, the `cortex-m-rt` crate provides an [`interrupt`] attribute to declare interrupt handlers. The available interrupts (and their position in the interrupt handler table) are usually automatically generated via `svd2rust` from a SVD description.
[`interrupt`]: https://docs.rs/cortex-m-rt-macros/0.1.5/cortex_m_rt_macros/attr.interrupt.html
``` rust , ignore
// Interrupt handler for the Timer2 interrupt
#[interrupt]
fn TIM2() {
// ..
// Clear reason for the generated interrupt request
}
```
Interrupt handlers look like plain functions (except for the lack of arguments) similar to exception handlers. However they can not be called directly by other parts of the firmware due to the special calling conventions. It is however possible to generate interrupt requests in software to trigger a diversion to the interrupt handler.
Similar to exception handlers it is also possible to declare `static mut` variables inside the interrupt handlers for *safe* state keeping.
``` rust , ignore
#[interrupt]
fn TIM2() {
static mut COUNT: u32 = 0;
// `COUNT` has type `&mut u32` and it's safe to use
*COUNT += 1;
}
```
For a more detailed description about the mechanisms demonstrated here please refer to the [exceptions section].
[exceptions section]: ./exceptions.md

View File

@ -1,51 +1,33 @@
# Panicking
# 恐慌(Panicking)
Panicking is a core part of the Rust language. Built-in operations like indexing
are runtime checked for memory safety. When out of bounds indexing is attempted
this results in a panic.
恐慌是Rust语言的核心部分。诸如索引之类的内置操作会在运行时进行内存安全检查,当尝试越界索引时,就会导致恐慌。
In the standard library panicking has a defined behavior: it unwinds the stack
of the panicking thread, unless the user opted for aborting the program on
panics.
在标准库中,恐慌具有明确定义的行为:它会对发生恐慌的线程进行栈展开,除非用户选择在恐慌时中止程序。
In programs without standard library, however, the panicking behavior is left
undefined. A behavior can be chosen by declaring a `#[panic_handler]` function.
This function must appear exactly *once* in the dependency graph of a program,
and must have the following signature: `fn(&PanicInfo) -> !`, where [`PanicInfo`]
is a struct containing information about the location of the panic.
但是,在没有标准库的程序中,恐慌行为是未定义的。可以通过声明一个`#[panic_handler]`函数来选择一种行为。该函数必须在程序的依赖关系图中恰好出现*一次*,并且必须具有以下签名:`fn(&PanicInfo) -> !`,其中[`PanicInfo`]是一个包含恐慌位置信息的结构体。
[`PanicInfo`]: https://doc.rust-lang.org/core/panic/struct.PanicInfo.html
[`PanicInfo`]:https://doc.rust-lang.org/core/panic/struct.PanicInfo.html
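As a concrete illustration, a minimal handler with that signature (roughly what the `panic-halt` crate provides) looks like this:

```rust , ignore
#![no_std]

use core::panic::PanicInfo;

#[panic_handler]
fn panic(_info: &PanicInfo) -> ! {
    // halt by spinning forever
    loop {}
}
```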
Given that embedded systems range from user facing to safety critical (cannot
crash) there's no one size fits all panicking behavior but there are plenty of
commonly used behaviors. These common behaviors have been packaged into crates
that define the `#[panic_handler]` function. Some examples include:
鉴于嵌入式系统的范围很广,从面向用户的设备到对安全至关重要(不能崩溃)的系统都有,因此没有一种适合所有场景的恐慌处理行为,但有许多常用的行为。这些常见行为已被打包成定义了`#[panic_handler]`函数的crate,常见的包括:
- [`panic-abort`]. A panic causes the abort instruction to be executed.
- [`panic-halt`]. A panic causes the program, or the current thread, to halt by
entering an infinite loop.
- [`panic-itm`]. The panicking message is logged using the ITM, an ARM Cortex-M
specific peripheral.
- [`panic-semihosting`]. The panicking message is logged to the host using the
semihosting technique.
- [`panic-abort`] 恐慌时会执行abort指令。
- [`panic-halt`] 恐慌时会导致程序或者其所在线程通过进入死循环的方式停止。
- [`panic-itm`] 恐慌消息使用ITM(ARM Cortex-M特定的外围设备)记录。
- [`panic-semihosting`] 恐慌消息使用半主机技术记录到主机。
[`panic-abort`]: https://crates.io/crates/panic-abort
[`panic-halt`]: https://crates.io/crates/panic-halt
[`panic-itm`]: https://crates.io/crates/panic-itm
[`panic-semihosting`]: https://crates.io/crates/panic-semihosting
[`panic-abort`]:https://crates.io/crates/panic-abort
[`panic-halt`]:https://crates.io/crates/panic-halt
[`panic-itm`]:https://crates.io/crates/panic-itm
[`panic-semihosting`]:https://crates.io/crates/panic-semihosting
You may be able to find even more crates searching for the [`panic-handler`]
keyword on crates.io.
在crates.io上搜索[`panic-handler`]您也许可以找到更多的crate。
[`panic-handler`]: https://crates.io/keywords/panic-handler
[`panic-handler`]:https://crates.io/keywords/panic-handler
A program can pick one of these behaviors simply by linking to the corresponding
crate. The fact that the panicking behavior is expressed in the source of
an application as a single line of code is not only useful as documentation but
can also be used to change the panicking behavior according to the compilation
profile. For example:
程序只需链接到相应的crate,就可以选择其中一种行为。恐慌行为在应用程序源码中仅体现为一行代码,这不仅可以当作文档,还可以用来根据编译配置(profile)切换恐慌行为。例如:
``` rust,ignore
``` rust , ignore
#![no_main]
#![no_std]
@ -60,16 +42,13 @@ extern crate panic_abort;
// ..
```
In this example the crate links to the `panic-halt` crate when built with the
dev profile (`cargo build`), but links to the `panic-abort` crate when built
with the release profile (`cargo build --release`).
在此示例中,使用dev配置(`cargo build`)构建时,crate链接到`panic-halt`,而使用release配置(`cargo build --release`)构建时,则链接到`panic-abort`。
## An example
## 一个例子
Here's an example that tries to index an array beyond its length. The operation
results in a panic.
这是一个尝试越界访问数组的示例,该操作会导致恐慌。
```rust,ignore
```rust,ignore
#![no_main]
#![no_std]
@ -87,8 +66,7 @@ fn main() -> ! {
}
```
This example chose the `panic-semihosting` behavior which prints the panic
message to the host console using semihosting.
本示例选择了`panic-semihosting`行为,它使用半主机技术将恐慌消息打印到主机控制台。
``` console
$ cargo run
@ -96,5 +74,4 @@ $ cargo run
panicked at 'index out of bounds: the len is 3 but the index is 4', src/main.rs:12:13
```
You can try changing the behavior to `panic-halt` and confirm that no message is
printed in that case.
您可以尝试将行为更改为`panic-halt`,并确认在这种情况下不会打印任何消息。
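一个最小的改法示意(假设仍沿用上面的示例程序):

```rust,ignore
// 将示例顶部的 `extern crate panic_semihosting;` 换成:
extern crate panic_halt; // 恐慌时进入死循环,不会打印任何消息
```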

src/start/panicking_en.md Normal file

@ -0,0 +1,76 @@
# Panicking
Panicking is a core part of the Rust language. Built-in operations like indexing are runtime checked for memory safety. When out of bounds indexing is attempted this results in a panic.
In the standard library panicking has a defined behavior: it unwinds the stack of the panicking thread, unless the user opted for aborting the program on panics.
In programs without standard library, however, the panicking behavior is left undefined. A behavior can be chosen by declaring a `#[panic_handler]` function. This function must appear exactly *once* in the dependency graph of a program, and must have the following signature: `fn(&PanicInfo) -> !`, where [`PanicInfo`] is a struct containing information about the location of the panic.
[`PanicInfo`]: https://doc.rust-lang.org/core/panic/struct.PanicInfo.html
Given that embedded systems range from user facing to safety critical (cannot crash) there's no one size fits all panicking behavior but there are plenty of commonly used behaviors. These common behaviors have been packaged into crates that define the `#[panic_handler]` function. Some examples include:
- [`panic-abort`]. A panic causes the abort instruction to be executed.
- [`panic-halt`]. A panic causes the program, or the current thread, to halt by entering an infinite loop.
- [`panic-itm`]. The panicking message is logged using the ITM, an ARM Cortex-M specific peripheral.
- [`panic-semihosting`]. The panicking message is logged to the host using the semihosting technique.
[`panic-abort`]: https://crates.io/crates/panic-abort
[`panic-halt`]: https://crates.io/crates/panic-halt
[`panic-itm`]: https://crates.io/crates/panic-itm
[`panic-semihosting`]: https://crates.io/crates/panic-semihosting
You may be able to find even more crates searching for the [`panic-handler`] keyword on crates.io.
[`panic-handler`]: https://crates.io/keywords/panic-handler
A program can pick one of these behaviors simply by linking to the corresponding crate. The fact that the panicking behavior is expressed in the source of an application as a single line of code is not only useful as documentation but can also be used to change the panicking behavior according to the compilation profile. For example:
```rust,ignore
#![no_main]
#![no_std]
// dev profile: easier to debug panics; can put a breakpoint on `rust_begin_unwind`
#[cfg(debug_assertions)]
extern crate panic_halt;
// release profile: minimize the binary size of the application
#[cfg(not(debug_assertions))]
extern crate panic_abort;
// ..
```
In this example the crate links to the `panic-halt` crate when built with the dev profile (`cargo build`), but links to the `panic-abort` crate when built with the release profile (`cargo build --release`).
## An example
Here's an example that tries to index an array beyond its length. The operation results in a panic.
```rust,ignore
#![no_main]
#![no_std]
extern crate panic_semihosting;
use cortex_m_rt::entry;
#[entry]
fn main() -> ! {
let xs = [0, 1, 2];
let i = xs.len() + 1;
let _y = xs[i]; // out of bounds access
loop {}
}
```
This example chose the `panic-semihosting` behavior which prints the panic message to the host console using semihosting.
``` console
$ cargo run
Running `qemu-system-arm -cpu cortex-m3 -machine lm3s6965evb (..)
panicked at 'index out of bounds: the len is 3 but the index is 4', src/main.rs:12:13
```
You can try changing the behavior to `panic-halt` and confirm that no message is printed in that case.


@ -1,48 +1,45 @@
# QEMU
We'll start writing a program for the [LM3S6965], a Cortex-M3 microcontroller.
We have chosen this as our initial target because it [can be emulated](https://wiki.qemu.org/Documentation/Platforms/ARM#Supported_in_qemu-system-arm) using QEMU
so you don't need to fiddle with hardware in this section and we can focus on
the tooling and the development process.
我们现在开始为Cortex-M3微控制器[LM3S6965]编写程序。选择它作为最初的目标,是因为它可以用QEMU进行[模拟](https://wiki.qemu.org/Documentation/Platforms/ARM#Supported_in_qemu-system-arm),因此在本节中您无需摆弄硬件,可以专注于工具和开发流程。
[LM3S6965]: http://www.ti.com/product/LM3S6965
[LM3S6965]: http://www.ti.com/product/LM3S6965
**IMPORTANT**
We'll use the name "app" for the project name in this tutorial.
Whenever you see the word "app" you should replace it with the name you selected
for your project. Or, you could also name your project "app" and avoid the
substitutions.
**重要**
在本教程中我们将名称“app”用作项目名称。每当您看到“app”一词时都应将其替换为自己的项目名称。或者您也可以将项目命名为“app”以避免替换。
## Creating a non standard Rust program
## 创建一个非标准的Rust程序
We'll use the [`cortex-m-quickstart`] project template to generate a new
project from it.
我们将使用[`cortex-m-quickstart`]项目模板生成一个新项目。
[`cortex-m-quickstart`]: https://github.com/rust-embedded/cortex-m-quickstart
### 使用`cargo-generate`
首先安装cargo-generate
[`cortex-m-quickstart`]: https://github.com/rust-embedded/cortex-m-quickstart
### Using `cargo-generate`
First install cargo-generate
```console
cargo install cargo-generate
```
Then generate a new project
然后生成一个新项目
```console
cargo generate --git https://github.com/rust-embedded/cortex-m-quickstart
```
```text
Project Name: app
Creating project called `app`...
Done! New project created /tmp/app
```
```console
cd app
```
### Using `git`
### 使用`git`
Clone the repository
克隆存储库
```console
git clone https://github.com/rust-embedded/cortex-m-quickstart app
@ -66,9 +63,9 @@ test = false
bench = false
```
### Using neither
### 手工下载
Grab the latest snapshot of the `cortex-m-quickstart` template and extract it.
获取`cortex-m-quickstart`模板的最新快照并解压缩它。
```console
curl -LO https://github.com/rust-embedded/cortex-m-quickstart/archive/master.zip
@ -77,17 +74,15 @@ mv cortex-m-quickstart-master app
cd app
```
Or you can browse to [`cortex-m-quickstart`], click the green "Clone or
download" button and then click "Download ZIP".
或者,您可以浏览到[`cortex-m-quickstart`]单击绿色的“克隆或下载”按钮然后单击“下载ZIP”。
Then fill in the placeholders in the `Cargo.toml` file as done in the second
part of the "Using `git`" version.
然后按照"使用`git`"一节第二部分中的操作,在`Cargo.toml`文件中填写占位符。
## Program Overview
## 程序概述
For convenience here are the most important parts of the source code in `src/main.rs`:
为了方便起见,这是`src/main.rs`中源代码的最重要部分:
```rust,ignore
```rust,ignore
#![no_std]
#![no_main]
@ -103,39 +98,24 @@ fn main() -> ! {
}
```
This program is a bit different from a standard Rust program so let's take a
closer look.
该程序与标准Rust程序有点不同因此让我们仔细看一下。
`#![no_std]` indicates that this program will *not* link to the standard crate,
`std`. Instead it will link to its subset: the `core` crate.
`#![no_std]`表示此程序*不会*链接到标准crate `std`,而是链接到它的子集:`core` crate。
`#![no_main]` indicates that this program won't use the standard `main`
interface that most Rust programs use. The main (no pun intended) reason to go
with `no_main` is that using the `main` interface in `no_std` context requires
nightly.
`#![no_main]`表示该程序将不使用大多数Rust程序使用的标准`main`接口。使用`no_main`的主要原因是,在`no_std`上下文中使用`main`接口需要nightly版本的Rust。
`extern crate panic_halt;`. This crate provides a `panic_handler` that defines
the panicking behavior of the program. We will cover this in more detail in the
[Panicking](panicking.md) chapter of the book.
`extern crate panic_halt;`。这个crate提供了一个 `panic_handler`,它定义了程序的恐慌行为。我们将在本书的[Panicking](panicking.md)一章中对此进行详细介绍。
[`#[entry]`][entry] is an attribute provided by the [`cortex-m-rt`] crate that's used
to mark the entry point of the program. As we are not using the standard `main`
interface we need another way to indicate the entry point of the program and
that'd be `#[entry]`.
[`#[entry]`][entry]是[`cortex-m-rt`] crate提供的属性,用于标记程序的入口点。由于我们没有使用标准的`main`接口,因此需要另一种方式来指示程序的入口点,那就是`#[entry]`。
[entry]: https://docs.rs/cortex-m-rt-macros/latest/cortex_m_rt_macros/attr.entry.html
[`cortex-m-rt`]: https://crates.io/crates/cortex-m-rt
[entry]: https://docs.rs/cortex-m-rt-macros/latest/cortex_m_rt_macros/attr.entry.html
[`cortex-m-rt`]: https://crates.io/crates/cortex-m-rt
`fn main() -> !`. Our program will be the *only* process running on the target
hardware so we don't want it to end! We use a [divergent function](https://doc.rust-lang.org/rust-by-example/fn/diverging.html) (the `-> !`
bit in the function signature) to ensure at compile time that'll be the case.
注意main函数的签名是`fn main() -> !`。因为我们的程序将是目标硬件上运行的*唯一*进程,所以我们不希望它结束!我们使用[发散函数](https://doc.rust-lang.org/rust-by-example/fn/diverging.html)(函数签名中的`-> !`表示该函数永不返回)在编译期就保证这一点。
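作为补充,下面用一个无关的小片段示意发散函数的写法(`run_forever`只是一个假设的名字):

```rust,ignore
// 返回类型 `!` 表示该函数永不返回,编译器会据此进行检查
fn run_forever() -> ! {
    loop {
        // 程序的主循环
    }
}
```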
## Cross compiling
## 交叉编译
The next step is to *cross* compile the program for the Cortex-M3 architecture.
That's as simple as running `cargo build --target $TRIPLE` if you know what the
compilation target (`$TRIPLE`) should be. Luckily, the `.cargo/config` in the
template has the answer:
下一步是针对Cortex-M3架构*交叉*编译该程序。如果您知道编译目标(`$TRIPLE`)应该是什么,只需运行`cargo build --target $TRIPLE`。不知道也没关系,模板项目中的`.cargo/config`里有答案:
```console
tail -n6 .cargo/config
@ -150,30 +130,26 @@ target = "thumbv7m-none-eabi" # Cortex-M3
# target = "thumbv7em-none-eabihf" # Cortex-M4F and Cortex-M7F (with FPU)
```
To cross compile for the Cortex-M3 architecture we have to use
`thumbv7m-none-eabi`. This compilation target has been set as the default so the
two commands below do the same:
为了针对Cortex-M3架构进行交叉编译我们必须使用`thumbv7m-none-eabi`。该编译目标已设置为默认目标,因此以下两个命令具有相同的功能:
```console
cargo build --target thumbv7m-none-eabi
cargo build
```
## Inspecting
## 检查
Now we have a non-native ELF binary in `target/thumbv7m-none-eabi/debug/app`. We
can inspect it using `cargo-binutils`.
现在我们在`target/thumbv7m-none-eabi/debug/app`中有一个非本机(non-native)的ELF二进制文件。我们可以使用`cargo-binutils`检查它。
With `cargo-readobj` we can print the ELF headers to confirm that this is an ARM
binary.
使用`cargo-readobj`我们可以打印ELF头以确认这是一个ARM二进制文件。
``` console
cargo readobj --bin app -- -file-headers
```
Note that:
* `--bin app` is sugar for inspect the binary at `target/$TRIPLE/debug/app`
* `--bin app` will also (re)compile the binary, if necessary
注意:
* `--bin app` 是检查 `target/$TRIPLE/debug/app` 这个二进制文件的简便写法
* `--bin app` 还会在必要时(重新)编译该二进制文件
``` text
@ -199,15 +175,18 @@ ELF Header:
Section header string table index: 18
```
`cargo-size` can print the size of the linker sections of the binary.
`cargo-size`可以打印二进制文件的链接器部分的大小。
> **注意**此输出假定已经合并了rust-embedd/cortex-m-rt111
!todo 这句话啥意思啊?
> **NOTE** this output assumes that rust-embedded/cortex-m-rt#111 has been
> merged
```console
cargo size --bin app --release -- -A
```
we use `--release` to inspect the optimized version
我们使用`--release`检查优化过的版本
``` text
app :
@ -231,33 +210,24 @@ section size addr
Total 14570
```
> A refresher on ELF linker sections
> 关于ELF链接器段(linker section)的复习
>
> - `.text` contains the program instructions
> - `.rodata` contains constant values like strings
> - `.data` contains statically allocated variables whose initial values are
> *not* zero
> - `.bss` also contains statically allocated variables whose initial values
> *are* zero
> - `.vector_table` is a *non*-standard section that we use to store the vector
> (interrupt) table
> - `.ARM.attributes` and the `.debug_*` sections contain metadata and will
> *not* be loaded onto the target when flashing the binary.
> - `.text` 包含程序指令
> - `.rodata` 包含字符串等常量
> - `.data` 包含初始值*不为*零的静态分配变量
> - `.bss` 也包含静态分配变量,其初始值*为*零
> - `.vector_table` 是一个*非*标准的段,用于存放(中断)向量表
> - `.ARM.attributes` 和 `.debug_*` 等段包含元数据,烧录二进制文件时*不会*写入目标板
**IMPORTANT**: ELF files contain metadata like debug information so their *size
on disk* does *not* accurately reflect the space the program will occupy when
flashed on a device. *Always* use `cargo-size` to check how big a binary really
is.
**重要**ELF文件包含诸如调试信息之类的元数据因此它们在磁盘上的大小不会准确地反映程序在设备上真实占用的空间,因此应该总是使用`cargo-size`来检查二进制文件的真正大小。
`cargo-objdump` can be used to disassemble the binary.
`cargo-objdump` 可用于反汇编二进制文件。
```console
cargo objdump --bin app --release -- -disassemble -no-show-raw-insn -print-imm-hex
```
> **NOTE** this output can differ on your system. New versions of rustc, LLVM
> and libraries can generate different assembly. We truncated some of the instructions
> to keep the snippet small.
> **注意**此输出在您的系统上可能会有所不同。 不同版本的rustcLLVM和库都会生成不同的程序集。另外,由于空间问题,我们也对内容做了删减。
```text
app: file format ELF32-arm-little
@ -298,14 +268,13 @@ HardFault:
663: <unknown>
```
## Running
## 运行
Next, let's see how to run an embedded program on QEMU! This time we'll use the
`hello` example which actually does something.
接下来让我们看看如何在QEMU上运行嵌入式程序这次我们将使用`hello`示例。
For convenience here's the source code of `examples/hello.rs`:
为了方便起见,这是`examples/hello.rs`的源代码:
```rust,ignore
```rust,ignore
//! Prints "Hello, world!" on the host console using semihosting
#![no_main]
@ -328,20 +297,17 @@ fn main() -> ! {
}
```
This program uses something called semihosting to print text to the *host*
console. When using real hardware this requires a debug session but when using
QEMU this Just Works.
该程序使用一种称为半主机(semihosting)的方式将文本打印到*host*控制台。在使用实际硬件时这需要调试会话支持但是在使用QEMU时直接使用就行了。
Let's start by compiling the example:
让我们从编译示例开始:
```console
cargo build --example hello
```
The output binary will be located at
`target/thumbv7m-none-eabi/debug/examples/hello`.
输出二进制文件将位于`target/thumbv7m-none-eabi/debug/examples/hello`。
To run this binary on QEMU run the following command:
要在QEMU上运行此二进制文件请运行以下命令
```console
qemu-system-arm \
@ -356,8 +322,7 @@ qemu-system-arm \
Hello, world!
```
The command should successfully exit (exit code = 0) after printing the text. On
*nix you can check that with the following command:
打印文本后该命令应成功退出退出代码为0)。在*nix上您可以使用以下命令进行检查
```console
echo $?
@ -367,32 +332,21 @@ echo $?
0
```
Let's break down that QEMU command:
让我们分解一下QEMU命令
- `qemu-system-arm`. This is the QEMU emulator. There are a few variants of
these QEMU binaries; this one does full *system* emulation of *ARM* machines
hence the name.
- `qemu-system-arm`。这是QEMU仿真器。QEMU有多个不同架构的变体;从名字可以看出,这个变体做的是*ARM*机器的完整*系统*仿真。
- `-cpu cortex-m3`. This tells QEMU to emulate a Cortex-M3 CPU. Specifying the
CPU model lets us catch some miscompilation errors: for example, running a
program compiled for the Cortex-M4F, which has a hardware FPU, will make QEMU
error during its execution.
- `-cpu cortex-m3`。这告诉QEMU模拟Cortex-M3 CPU。指定CPU型号可以让我们发现一些编译目标配置错误:例如,运行针对带硬件FPU的Cortex-M4F编译的程序时,QEMU会在执行期间报错。
- `-machine lm3s6965evb`. This tells QEMU to emulate the LM3S6965EVB, a
evaluation board that contains a LM3S6965 microcontroller.
- `-machine lm3s6965evb`。这告诉QEMU模拟LM3S6965EVB,这是一个包含LM3S6965微控制器的开发板。
- `-nographic`. This tells QEMU to not launch its GUI.
- `-nographic`。这告诉QEMU不要启动其GUI。
- `-semihosting-config (..)`. This tells QEMU to enable semihosting. Semihosting
lets the emulated device, among other things, use the host stdout, stderr and
stdin and create files on the host.
- `-semihosting-config (..)`。这告诉QEMU启用半主机。半主机使被仿真的设备可以使用主机的stdout、stderr和stdin,并能在主机上创建文件。
- `-kernel $file`. This tells QEMU which binary to load and run on the emulated
machine.
- `-kernel $file`。这告诉QEMU在模拟机上加载并运行哪个二进制文件。
Typing out that long QEMU command is too much work! We can set a custom runner
to simplify the process. `.cargo/config` has a commented out runner that invokes
QEMU; let's uncomment it:
输入这么长的QEMU命令太麻烦了我们可以设置一个自定义运行器以简化过程。`.cargo/config`有一行启动 QEMU的运行器被注释掉了,让我们去掉这行注释:
```console
head -n3 .cargo/config
@ -404,9 +358,7 @@ head -n3 .cargo/config
runner = "qemu-system-arm -cpu cortex-m3 -machine lm3s6965evb -nographic -semihosting-config enable=on,target=native -kernel"
```
This runner only applies to the `thumbv7m-none-eabi` target, which is our
default compilation target. Now `cargo run` will compile the program and run it
on QEMU:
该运行器仅适用于`thumbv7m-none-eabi`目标,这是我们的默认编译目标。现在直接运行`cargo run`就会编译程序并在QEMU上运行
```console
cargo run --example hello --release
@ -419,21 +371,17 @@ cargo run --example hello --release
Hello, world!
```
## Debugging
## 调试
Debugging is critical to embedded development. Let's see how it's done.
调试对于嵌入式开发至关重要。让我们看看它是如何完成的。
Debugging an embedded device involves *remote* debugging as the program that we
want to debug won't be running on the machine that's running the debugger
program (GDB or LLDB).
调试嵌入式设备涉及远程调试,因为要调试的程序不会在运行调试器程序(GDB或LLDB)的计算机上运行。
Remote debugging involves a client and a server. In a QEMU setup, the client
will be a GDB (or LLDB) process and the server will be the QEMU process that's
also running the embedded program.
远程调试涉及客户端和服务器。针对QEMU客户端将是GDB(或LLDB)进程而服务器将是运行嵌入式程序的QEMU进程。
In this section we'll use the `hello` example we already compiled.
在本节中,我们将使用前面已经编译好的`hello`示例。
The first debugging step is to launch QEMU in debugging mode:
调试的第一步是在调试模式下启动QEMU
```console
qemu-system-arm \
@ -446,29 +394,22 @@ qemu-system-arm \
-kernel target/thumbv7m-none-eabi/debug/examples/hello
```
This command won't print anything to the console and will block the terminal. We
have passed two extra flags this time:
此命令不会在控制台上显示任何内容,并且会阻塞终端。这次我们额外传递了两个参数:
- `-gdb tcp::3333`. This tells QEMU to wait for a GDB connection on TCP
port 3333.
- `-gdb tcp::3333`。这告诉QEMU监听TCP端口3333,等待GDB的连接。
- `-S`. This tells QEMU to freeze the machine at startup. Without this the
program would have reached the end of main before we had a chance to launch
the debugger!
- `-S` 这告诉QEMU在启动时冻结计算机。没有这个可能我们还没有来得及启动调试器,程序就已经结束了!
接下来我们在另一个终端中启动GDB并告诉它加载示例的调试符号
Next we launch GDB in another terminal and tell it to load the debug symbols of
the example:
```console
gdb-multiarch -q target/thumbv7m-none-eabi/debug/examples/hello
```
**NOTE**: you might need another version of gdb instead of `gdb-multiarch` depending
on which one you installed in the installation chapter. This could also be
`arm-none-eabi-gdb` or just `gdb`.
**注意**您可能需要其他版本的gdb而不是`gdb-multiarch`,具体取决于在安装一章中你安装的版本。也可能是`arm-none-eabi-gdb`或直接是`gdb`。
Then within the GDB shell we connect to QEMU, which is waiting for a connection
on TCP port 3333.
然后在GDB Shell中我们连接到QEMU它正在TCP端口3333上等待连接。
```console
target remote :3333
@ -480,12 +421,9 @@ Reset () at $REGISTRY/cortex-m-rt-0.6.1/src/lib.rs:473
473 pub unsafe extern "C" fn Reset() -> ! {
```
You'll see that the process is halted and that the program counter is pointing
to a function named `Reset`. That is the reset handler: what Cortex-M cores
execute upon booting.
您会看到该进程已经暂停,并且程序计数器指向一个名为`Reset`的函数。这就是复位处理函数(reset handler):它是Cortex-M内核启动时所执行的代码。
This reset handler will eventually call our main function. Let's skip all the
way there using a breakpoint and the `continue` command:
这个复位处理函数最终会调用我们的main函数。让我们设置一个断点,并用`continue`命令一路跳到那里:
```console
break main
@ -506,8 +444,7 @@ Breakpoint 1, main () at examples/hello.rs:17
17 let mut stdout = hio::hstdout().unwrap();
```
We are now close to the code that prints "Hello, world!". Let's move forward
using the `next` command.
我们现在已经很接近打印"Hello, world!"的代码了。让我们用`next`命令继续前进。
``` console
next
@ -525,15 +462,15 @@ next
20 debug::exit(debug::EXIT_SUCCESS);
```
At this point you should see "Hello, world!" printed on the terminal that's
running `qemu-system-arm`.
此时,您应该在运行`qemu-system-arm`的终端上看到"Hello, world!"。
```text
$ qemu-system-arm (..)
Hello, world!
```
Calling `next` again will terminate the QEMU process.
再次执行`next`将终止QEMU进程。
```console
next
@ -543,7 +480,7 @@ next
[Inferior 1 (Remote target) exited normally]
```
You can now exit the GDB session.
现在您可以退出GDB会话。
``` console
quit

src/start/qemu_en.md Normal file

@ -0,0 +1,484 @@
# QEMU
We'll start writing a program for the [LM3S6965], a Cortex-M3 microcontroller. We have chosen this as our initial target because it [can be emulated](https://wiki.qemu.org/Documentation/Platforms/ARM#Supported_in_qemu-system-arm) using QEMU so you don't need to fiddle with hardware in this section and we can focus on the tooling and the development process.
[LM3S6965]: http://www.ti.com/product/LM3S6965
**IMPORTANT**
We'll use the name "app" for the project name in this tutorial. Whenever you see the word "app" you should replace it with the name you selected for your project. Or, you could also name your project "app" and avoid the substitutions.
## Creating a non standard Rust program
We'll use the [`cortex-m-quickstart`] project template to generate a new project from it.
[`cortex-m-quickstart`]: https://github.com/rust-embedded/cortex-m-quickstart
### Using `cargo-generate`
First install cargo-generate
```console
cargo install cargo-generate
```
Then generate a new project
```console
cargo generate --git https://github.com/rust-embedded/cortex-m-quickstart
```
```text
Project Name: app
Creating project called `app`...
Done! New project created /tmp/app
```
```console
cd app
```
### Using `git`
Clone the repository
```console
git clone https://github.com/rust-embedded/cortex-m-quickstart app
cd app
```
And then fill in the placeholders in the `Cargo.toml` file
```toml
[package]
authors = ["{{authors}}"] # "{{authors}}" -> "John Smith"
edition = "2018"
name = "{{project-name}}" # "{{project-name}}" -> "awesome-app"
version = "0.1.0"
# ..
[[bin]]
name = "{{project-name}}" # "{{project-name}}" -> "awesome-app"
test = false
bench = false
```
### Using neither
Grab the latest snapshot of the `cortex-m-quickstart` template and extract it.
```console
curl -LO https://github.com/rust-embedded/cortex-m-quickstart/archive/master.zip
unzip master.zip
mv cortex-m-quickstart-master app
cd app
```
Or you can browse to [`cortex-m-quickstart`], click the green "Clone or download" button and then click "Download ZIP".
Then fill in the placeholders in the `Cargo.toml` file as done in the second part of the "Using `git`" version.
## Program Overview
For convenience here are the most important parts of the source code in `src/main.rs`:
```rust,ignore
#![no_std]
#![no_main]
extern crate panic_halt;
use cortex_m_rt::entry;
#[entry]
fn main() -> ! {
loop {
// your code goes here
}
}
```
This program is a bit different from a standard Rust program so let's take a closer look.
`#![no_std]` indicates that this program will *not* link to the standard crate `std`. Instead it will link to its subset: the `core` crate.
`#![no_main]` indicates that this program won't use the standard `main` interface that most Rust programs use. The main (no pun intended) reason to go with `no_main` is that using the `main` interface in `no_std` context requires nightly.
`extern crate panic_halt;`. This crate provides a `panic_handler` that defines the panicking behavior of the program. We will cover this in more detail in the [Panicking](panicking.md) chapter of the book.
[`#[entry]`][entry] is an attribute provided by the [`cortex-m-rt`] crate that's used to mark the entry point of the program. As we are not using the standard `main` interface we need another way to indicate the entry point of the program and that'd be `#[entry]`.
[entry]: https://docs.rs/cortex-m-rt-macros/latest/cortex_m_rt_macros/attr.entry.html
[`cortex-m-rt`]: https://crates.io/crates/cortex-m-rt
`fn main() -> !`. Our program will be the *only* process running on the target hardware so we don't want it to end! We use a [divergent function](https://doc.rust-lang.org/rust-by-example/fn/diverging.html) (the `-> !` bit in the function signature) to ensure at compile time that'll be the case.
## Cross compiling
The next step is to *cross* compile the program for the Cortex-M3 architecture. That's as simple as running `cargo build --target $TRIPLE` if you know what the compilation target (`$TRIPLE`) should be. Luckily, the `.cargo/config` in the template has the answer:
```console
tail -n6 .cargo/config
```
```toml
[build]
# Pick ONE of these compilation targets
# target = "thumbv6m-none-eabi" # Cortex-M0 and Cortex-M0+
target = "thumbv7m-none-eabi" # Cortex-M3
# target = "thumbv7em-none-eabi" # Cortex-M4 and Cortex-M7 (no FPU)
# target = "thumbv7em-none-eabihf" # Cortex-M4F and Cortex-M7F (with FPU)
```
To cross compile for the Cortex-M3 architecture we have to use `thumbv7m-none-eabi`. This compilation target has been set as the default so the two commands below do the same:
```console
cargo build --target thumbv7m-none-eabi
cargo build
```
## Inspecting
Now we have a non-native ELF binary in `target/thumbv7m-none-eabi/debug/app`. We can inspect it using `cargo-binutils`.
With `cargo-readobj` we can print the ELF headers to confirm that this is an ARM binary.
``` console
cargo readobj --bin app -- -file-headers
```
Note that:
* `--bin app` is sugar for inspect the binary at `target/$TRIPLE/debug/app`
* `--bin app` will also (re)compile the binary, if necessary
``` text
ELF Header:
Magic: 7f 45 4c 46 01 01 01 00 00 00 00 00 00 00 00 00
Class: ELF32
Data: 2's complement, little endian
Version: 1 (current)
OS/ABI: UNIX - System V
ABI Version: 0x0
Type: EXEC (Executable file)
Machine: ARM
Version: 0x1
Entry point address: 0x405
Start of program headers: 52 (bytes into file)
Start of section headers: 153204 (bytes into file)
Flags: 0x5000200
Size of this header: 52 (bytes)
Size of program headers: 32 (bytes)
Number of program headers: 2
Size of section headers: 40 (bytes)
Number of section headers: 19
Section header string table index: 18
```
`cargo-size` can print the size of the linker sections of the binary.
> **NOTE** this output assumes that rust-embedded/cortex-m-rt#111 has been
> merged
```console
cargo size --bin app --release -- -A
```
we use `--release` to inspect the optimized version
``` text
app :
section size addr
.vector_table 1024 0x0
.text 92 0x400
.rodata 0 0x45c
.data 0 0x20000000
.bss 0 0x20000000
.debug_str 2958 0x0
.debug_loc 19 0x0
.debug_abbrev 567 0x0
.debug_info 4929 0x0
.debug_ranges 40 0x0
.debug_macinfo 1 0x0
.debug_pubnames 2035 0x0
.debug_pubtypes 1892 0x0
.ARM.attributes 46 0x0
.debug_frame 100 0x0
.debug_line 867 0x0
Total 14570
```
> A refresher on ELF linker sections
>
> - `.text` contains the program instructions
> - `.rodata` contains constant values like strings
> - `.data` contains statically allocated variables whose initial values are
> *not* zero
> - `.bss` also contains statically allocated variables whose initial values
> *are* zero
> - `.vector_table` is a *non*-standard section that we use to store the vector
> (interrupt) table
> - `.ARM.attributes` and the `.debug_*` sections contain metadata and will
> *not* be loaded onto the target when flashing the binary.
**IMPORTANT**: ELF files contain metadata like debug information so their *size on disk* does *not* accurately reflect the space the program will occupy when flashed on a device. *Always* use `cargo-size` to check how big a binary really is.
`cargo-objdump` can be used to disassemble the binary.
```console
cargo objdump --bin app --release -- -disassemble -no-show-raw-insn -print-imm-hex
```
> **NOTE** this output can differ on your system. New versions of rustc, LLVM and libraries can generate different assembly. We truncated some of the instructions to keep the snippet small.
```text
app: file format ELF32-arm-little
Disassembly of section .text:
main:
400: bl #0x256
404: b #-0x4 <main+0x4>
Reset:
406: bl #0x24e
40a: movw r0, #0x0
< .. truncated any more instructions .. >
DefaultHandler_:
656: b #-0x4 <DefaultHandler_>
UsageFault:
657: strb r7, [r4, #0x3]
DefaultPreInit:
658: bx lr
__pre_init:
659: strb r7, [r0, #0x1]
__nop:
65a: bx lr
HardFaultTrampoline:
65c: mrs r0, msp
660: b #-0x2 <HardFault_>
HardFault_:
662: b #-0x4 <HardFault_>
HardFault:
663: <unknown>
```
## Running
Next, let's see how to run an embedded program on QEMU! This time we'll use the `hello` example which actually does something.
For convenience here's the source code of `examples/hello.rs`:
```rust,ignore
//! Prints "Hello, world!" on the host console using semihosting
#![no_main]
#![no_std]
extern crate panic_halt;
use cortex_m_rt::entry;
use cortex_m_semihosting::{debug, hprintln};
#[entry]
fn main() -> ! {
hprintln!("Hello, world!").unwrap();
// exit QEMU
// NOTE do not run this on hardware; it can corrupt OpenOCD state
debug::exit(debug::EXIT_SUCCESS);
loop {}
}
```
This program uses something called semihosting to print text to the *host* console. When using real hardware this requires a debug session but when using QEMU this Just Works.
Let's start by compiling the example:
```console
cargo build --example hello
```
The output binary will be located at `target/thumbv7m-none-eabi/debug/examples/hello`.
To run this binary on QEMU run the following command:
```console
qemu-system-arm \
-cpu cortex-m3 \
-machine lm3s6965evb \
-nographic \
-semihosting-config enable=on,target=native \
-kernel target/thumbv7m-none-eabi/debug/examples/hello
```
```text
Hello, world!
```
The command should successfully exit (exit code = 0) after printing the text. On *nix you can check that with the following command:
```console
echo $?
```
```text
0
```
Let's break down that QEMU command:
- `qemu-system-arm`. This is the QEMU emulator. There are a few variants of these QEMU binaries; this one does full *system* emulation of *ARM* machines hence the name.
- `-cpu cortex-m3`. This tells QEMU to emulate a Cortex-M3 CPU. Specifying the CPU model lets us catch some miscompilation errors: for example, running a program compiled for the Cortex-M4F, which has a hardware FPU, will make QEMU error during its execution.
- `-machine lm3s6965evb`. This tells QEMU to emulate the LM3S6965EVB, an evaluation board that contains a LM3S6965 microcontroller.
- `-nographic`. This tells QEMU to not launch its GUI.
- `-semihosting-config (..)`. This tells QEMU to enable semihosting. Semihosting lets the emulated device, among other things, use the host stdout, stderr and stdin and create files on the host.
- `-kernel $file`. This tells QEMU which binary to load and run on the emulated machine.
Typing out that long QEMU command is too much work! We can set a custom runner to simplify the process. `.cargo/config` has a commented out runner that invokes QEMU; let's uncomment it:
```console
head -n3 .cargo/config
```
```toml
[target.thumbv7m-none-eabi]
# uncomment this to make `cargo run` execute programs on QEMU
runner = "qemu-system-arm -cpu cortex-m3 -machine lm3s6965evb -nographic -semihosting-config enable=on,target=native -kernel"
```
This runner only applies to the `thumbv7m-none-eabi` target, which is our default compilation target. Now `cargo run` will compile the program and run it on QEMU:
```console
cargo run --example hello --release
```
```text
Compiling app v0.1.0 (file:///tmp/app)
Finished release [optimized + debuginfo] target(s) in 0.26s
Running `qemu-system-arm -cpu cortex-m3 -machine lm3s6965evb -nographic -semihosting-config enable=on,target=native -kernel target/thumbv7m-none-eabi/release/examples/hello`
Hello, world!
```
## Debugging
Debugging is critical to embedded development. Let's see how it's done.
Debugging an embedded device involves *remote* debugging as the program that we want to debug won't be running on the machine that's running the debugger program (GDB or LLDB).
Remote debugging involves a client and a server. In a QEMU setup, the client will be a GDB (or LLDB) process and the server will be the QEMU process that's also running the embedded program.
In this section we'll use the `hello` example we already compiled.
The first debugging step is to launch QEMU in debugging mode:
```console
qemu-system-arm \
-cpu cortex-m3 \
-machine lm3s6965evb \
-nographic \
-semihosting-config enable=on,target=native \
-gdb tcp::3333 \
-S \
-kernel target/thumbv7m-none-eabi/debug/examples/hello
```
This command won't print anything to the console and will block the terminal. We have passed two extra flags this time:
- `-gdb tcp::3333`. This tells QEMU to wait for a GDB connection on TCP port 3333.
- `-S`. This tells QEMU to freeze the machine at startup. Without this the program would have reached the end of main before we had a chance to launch the debugger!
Next we launch GDB in another terminal and tell it to load the debug symbols of the example:
```console
gdb-multiarch -q target/thumbv7m-none-eabi/debug/examples/hello
```
**NOTE**: you might need another version of gdb instead of `gdb-multiarch` depending on which one you installed in the installation chapter. This could also be `arm-none-eabi-gdb` or just `gdb`.
Then within the GDB shell we connect to QEMU, which is waiting for a connection on TCP port 3333.
```console
target remote :3333
```
```text
Remote debugging using :3333
Reset () at $REGISTRY/cortex-m-rt-0.6.1/src/lib.rs:473
473 pub unsafe extern "C" fn Reset() -> ! {
```
You'll see that the process is halted and that the program counter is pointing to a function named `Reset`. That is the reset handler: what Cortex-M cores execute upon booting.
This reset handler will eventually call our main function. Let's skip all the way there using a breakpoint and the `continue` command:
```console
break main
```
```text
Breakpoint 1 at 0x400: file examples/panic.rs, line 29.
```
```console
continue
```
```text
Continuing.
Breakpoint 1, main () at examples/hello.rs:17
17 let mut stdout = hio::hstdout().unwrap();
```
We are now close to the code that prints "Hello, world!". Let's move forward using the `next` command.
``` console
next
```
```text
18 writeln!(stdout, "Hello, world!").unwrap();
```
```console
next
```
```text
20 debug::exit(debug::EXIT_SUCCESS);
```
At this point you should see "Hello, world!" printed on the terminal that's running `qemu-system-arm`.
```text
$ qemu-system-arm (..)
Hello, world!
```
Calling `next` again will terminate the QEMU process.
```console
next
```
```text
[Inferior 1 (Remote target) exited normally]
```
You can now exit the GDB session.
``` console
quit
```


@ -1,27 +1,30 @@
# Memory Mapped Registers
# 内存映射寄存器
Embedded systems can only get so far by executing normal Rust code and moving data around in RAM. If we want to get any information into or out of our system (be that blinking an LED, detecting a button press or communicating with an off-chip peripheral on some sort of bus) we're going to have to dip into the world of Peripherals and their 'memory mapped registers'.
仅靠执行普通的Rust代码、在RAM里搬运数据,嵌入式系统能做的事情很有限。如果我们想让信息进出系统(比如点亮LED、检测按键按下,或者通过某种总线与片外外设通信),就必须深入了解外设及其"内存映射寄存器"。
You may well find that the code you need to access the peripherals in your micro-controller has already been written, at one of the following levels:
您很可能会发现,访问微控制器外设所需的代码已经有人写好了,它们大致分为以下几个层次:
* Micro-architecture Crate - This sort of crate handles any useful routines common to the processor core your microcontroller is using, as well as any peripherals that are common to all micro-controllers that use that particular type of processor core. For example the [cortex-m] crate gives you functions to enable and disable interrupts, which are the same for all Cortex-M based micro-controllers. It also gives you access to the 'SysTick' peripheral included with all Cortex-M based micro-controllers.
* Peripheral Access Crate (PAC) - This sort of crate is a thin wrapper over the various memory-wrapper registers defined for your particular part-number of micro-controller you are using. For example, [tm4c123x] for the Texas Instruments Tiva-C TM4C123 series, or [stm32f30x] for the ST-Micro STM32F30x series. Here, you'll be interacting with the registers directly, following each peripheral's operating instructions given in your micro-controller's Technical Reference Manual.
* HAL Crate - These crates offer a more user-friendly API for your particular processor, often by implementing some common traits defined in [embedded-hal]. For example, this crate might offer a `Serial` struct, with a constructor that takes an appropriate set of GPIO pins and a baud rate, and offers some sort of `write_byte` function for sending data. See the chapter on [Portability] for more information on [embedded-hal].
* Board Crate - These crates go one step further than a HAL Crate by pre-configuring various peripherals and GPIO pins to suit the specific developer kit or board you are using, such as [F3] for the STM32F3DISCOVERY board.
* 处理器架构相关Crate (Micro-architecture Crate) - 这类crate处理您所用处理器内核的通用例程,以及所有使用这种内核的微控制器共有的外设。例如,[cortex-m] crate提供了启用和禁用中断的函数,这些函数对所有基于Cortex-M的微控制器都是相同的。它还允许您访问所有基于Cortex-M的微控制器都带有的'SysTick'外设。
[cortex-m]: https://crates.io/crates/cortex-m
[tm4c123x]: https://crates.io/crates/tm4c123x
[stm32f30x]: https://crates.io/crates/stm32f30x
[embedded-hal]: https://crates.io/crates/embedded-hal
[Portability]: ../portability/index.md
[F3]: https://crates.io/crates/f3
* 外设相关Crate (PAC) - 这类crate是对特定型号微控制器所定义的内存映射寄存器的薄封装。例如,[tm4c123x]对应德州仪器(TI)的Tiva-C TM4C123系列,[stm32f30x]对应ST-Micro的STM32F30x系列。借助它们,您可以按照微控制器技术参考手册中给出的各外设操作说明,直接与寄存器交互。
* HAL crate - 这些crate通过实现[embedded-hal]中定义的一些常见Trait来提供更友好的处理器相关API。例如此crate可能提供一个`Serial`结构体该结构体提供一个构造函数来配置一组GPIO引脚和波特率并提供某种`write_byte`函数来发送数据。有关[embedded-hal]的更多信息,请参见[可移植性]一章。
* 开发板相关Crate - 这类crate在HAL crate的基础上更进一步,预先配置好各种外设和GPIO引脚,以适配您所用的特定开发套件或开发板,例如针对STM32F3DISCOVERY开发板的[F3] crate。
[cortex-m]: https://crates.io/crates/cortex-m
[tm4c123x]: https://crates.io/crates/tm4c123x
[stm32f30x]: https://crates.io/crates/stm32f30x
[embedded-hal]: https://crates.io/crates/embedded-hal
[可移植性]: ../portability/index.md
[F3]: https://crates.io/crates/f3
## Starting at the bottom
## 从底层开始
Let's look at the SysTick peripheral that's common to all Cortex-M based micro-controllers. We can find a pretty low-level API in the [cortex-m] crate, and we can use it like this:
让我们看一下所有基于Cortex-M的微控制器共有的SysTick外设。我们可以在[cortex-m]crate中找到一个相当低级的API我们可以像这样使用它
```rust,ignore
```rust,ignore
use cortex_m::peripheral::{syst, Peripherals};
use cortex_m_rt::entry;
@ -41,15 +44,15 @@ fn main() -> ! {
}
```
The functions on the `SYST` struct map pretty closely to the functionality defined by the ARM Technical Reference Manual for this peripheral. There's nothing in this API about 'delaying for X milliseconds' - we have to crudely implement that ourselves using a `while` loop. Note that we can't access our `SYST` struct until we have called `Peripherals::take()` - this is a special routine that guarantees that there is only one `SYST` structure in our entire program. For more on that, see the [Peripherals] section.
`SYST`结构体上的函数与ARM技术参考手册中为这个外设定义的功能非常接近。这个API里没有"延时X毫秒"之类的接口,我们只能自己用`while`循环粗略地实现。注意,在调用`Peripherals::take()`之前,我们无法访问`SYST`结构体 - 这个特殊例程保证了整个程序中只有一个`SYST`实例。更多内容请参见[外围设备]一章。
[Peripherals]: ../peripherals/index.md
[外围设备]: ../peripherals/index.md
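例如,可以像下面这样自己封装一个粗糙的忙等延时(仅作示意;函数名和参数是为说明而假设的,重装载值与时钟源需按实际情况设置):

```rust,ignore
use cortex_m::peripheral::{syst, SYST};

// 粗略的忙等延时:让 SysTick 从 `ticks` 倒数到 0
fn delay_ticks(systick: &mut SYST, ticks: u32) {
    systick.set_clock_source(syst::SystClkSource::Core);
    systick.set_reload(ticks); // 注意:SysTick 的重装载值最多 24 位
    systick.clear_current();
    systick.enable_counter();
    while !systick.has_wrapped() {
        // 空转等待
    }
    systick.disable_counter();
}
```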
## Using a Peripheral Access Crate (PAC)
## 使用外设crate(PAC)
We won't get very far with our embedded software development if we restrict ourselves to only the basic peripherals included with every Cortex-M. At some point, we're going to need to write some code that's specific to the particular micro-controller we're using. In this example, let's assume we have an Texas Instruments TM4C123 - a middling 80MHz Cortex-M4 with 256 KiB of Flash. We're going to pull in the [tm4c123x] crate to make use of this chip.
如果我们只局限于每个Cortex-M都自带的基础外设,嵌入式软件开发就走不了太远。总有一天,我们需要编写一些特定于所用微控制器的代码。在本例中,假设我们使用德州仪器(TI)的TM4C123 - 一款带256 KiB Flash、主频80MHz的中端Cortex-M4。我们将引入[tm4c123x]这个crate来使用这块芯片。
```rust,ignore
```rust , ignore
#![no_std]
#![no_main]
@ -76,35 +79,36 @@ pub fn init() -> (Delay, Leds) {
```
We've accessed the `PWM0` peripheral in exactly the same way as we accessed the `SYST` peripheral earlier, except we called `tm4c123x::Peripherals::take()`. As this crate was auto-generated using [svd2rust], the access functions for our register fields take a closure, rather than a numeric argument. While this looks like a lot of code, the Rust compiler can use it to perform a bunch of checks for us, but then generate machine-code which is pretty close to hand-written assembler! Where the auto-generated code isn't able to determine that all possible arguments to a particular accessor function are valid (for example, if the SVD defines the register as 32-bit but doesn't say if some of those 32-bit values have a special meaning), then the function is marked as `unsafe`. We can see this in the example above when setting the `load` and `compa` sub-fields using the `bits()` function.
除了调用的是`tm4c123x::Peripherals::take()`之外,我们访问`PWM0`外设的方式与之前访问`SYST`外设的方式完全相同。由于这个crate是用[svd2rust]自动生成的,寄存器字段的访问函数接受的是闭包,而不是数字参数。虽然这看起来代码很多,但Rust编译器可以借此为我们执行一系列检查,同时生成与手写汇编非常接近的机器码!如果自动生成的代码无法确定某个访问器函数的所有可能参数都有效(例如,SVD只把寄存器定义为32位,而没有说明其中某些值是否有特殊含义),该函数就会被标记为`unsafe`。在上面的示例中,使用`bits()`函数设置`load`和`compa`子字段时就能看到这一点。
### Reading
### 读访问
The `read()` function returns an object which gives read-only access to the various sub-fields within this register, as defined by the manufacturer's SVD file for this chip. You can find all the functions available on special `R` return type for this particular register, in this particular peripheral, on this particular chip, in the [tm4c123x documentation][tm4c123x documentation R].
`read()`函数返回一个对象,该对象提供对这个寄存器中各个子字段的只读访问,这些子字段由制造商提供的该芯片SVD文件定义。对于这块芯片上这个外设的这个寄存器,其专用返回类型`R`上可用的所有函数,都可以在[tm4c123x文档][tm4c123x文档R]中找到。
```rust,ignore
```rust,ignore
if pwm.ctl.read().globalsync0().is_set() {
// Do a thing
}
```
### Writing
### 写访问
The `write()` function takes a closure with a single argument. Typically we call this `w`. This argument then gives read-write access to the various sub-fields within this register, as defined by the manufacturer's SVD file for this chip. Again, you can find all the functions available on the 'w' for this particular register, in this particular peripheral, on this particular chip, in the [tm4c123x documentation][tm4c123x Documentation W]. Note that all of the sub-fields that we do not set will be set to a default value for us - any existing content in the register will be lost.
`write()`函数接受一个只带单个参数的闭包,我们通常把这个参数叫做`w`。根据制造商提供的该芯片SVD文件,这个参数提供对该寄存器内各个子字段的读写访问。同样,对于这块芯片上这个外设的这个寄存器,`w`上可用的所有函数都可以在[tm4c123x文档][tm4c123x文档W]中找到。请注意,所有我们没有设置的子字段都会被设为默认值 - 寄存器中原有的内容将会丢失。
```rust,ignore
```rust,ignore
pwm.ctl.write(|w| w.globalsync0().clear_bit());
```
### Modifying
### 修改
If we wish to change only one particular sub-field in this register and leave the other sub-fields unchanged, we can use the `modify` function. This function takes a closure with two arguments - one for reading and one for writing. Typically we call these `r` and `w` respectively. The `r` argument can be used to inspect the current contents of the register, and the `w` argument can be used to modify the register contents.
如果我们只想更改寄存器中的某一个子字段,而让其他子字段保持不变,可以使用`modify`函数。这个函数接受一个带两个参数的闭包 - 一个用于读取,一个用于写入,我们通常分别称它们为`r`和`w`。`r`参数可用于查看寄存器的当前内容,`w`参数可用于修改寄存器的内容。
```rust,ignore
```rust,ignore
pwm.ctl.modify(|r, w| w.globalsync0().clear_bit());
```
The `modify` function really shows the power of closures here. In C, we'd have to read into some temporary value, modify the correct bits and then write the value back. This means there's considerable scope for error:
`modify`函数在这里真正体现了闭包的威力。在C语言中,我们必须先把寄存器的值读到某个临时变量里,修改正确的位,再把值写回去。这留下了相当大的出错空间:
```C
uint32_t temp = pwm0.ctl.read();
@ -115,17 +119,17 @@ temp2 |= PWM0_ENABLE_PWM4EN;
pwm0.enable.write(temp); // Uh oh! Wrong variable!
```
[svd2rust]: https://crates.io/crates/svd2rust
[tm4c123x documentation R]: https://docs.rs/tm4c123x/0.7.0/tm4c123x/pwm0/ctl/struct.R.html
[tm4c123x documentation W]: https://docs.rs/tm4c123x/0.7.0/tm4c123x/pwm0/ctl/struct.W.html
[svd2rust]: https://crates.io/crates/svd2rust
[tm4c123x文档R]: https://docs.rs/tm4c123x/0.7.0/tm4c123x/pwm0/ctl/struct.R.html
[tm4c123x文档W]: https://docs.rs/tm4c123x/0.7.0/tm4c123x/pwm0/ctl/struct.W.html
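作为对照,下面是与上面C代码意图等价的Rust写法示意(假设寄存器与字段名和前文示例一致):

```rust,ignore
// 读-改-写被绑定在同一个寄存器的闭包里,不存在写错变量的机会
pwm.ctl.modify(|_r, w| w.globalsync0().set_bit());
pwm.enable.modify(|_r, w| w.pwm4en().set_bit());
```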
## Using a HAL crate
## 使用HAL crate
The HAL crate for a chip typically works by implementing a custom Trait for the raw structures exposed by the PAC. Often this trait will define a function called `constrain()` for single peripherals or `split()` for things like GPIO ports with multiple pins. This function will consume the underlying raw peripheral structure and return a new object with a higher-level API. This API may also do things like have the Serial port `new` function require a borrow on some `Clock` structure, which can only be generated by calling the function which configures the PLLs and sets up all the clock frequencies. In this way, it is statically impossible to create a Serial port object without first having configured the clock rates, or for the Serial port object to mis-convert the baud rate into clock ticks. Some crates even define special traits for the states each GPIO pin can be in, requiring the user to put a pin into the correct state (say, by selecting the appropriate Alternate Function Mode) before passing the pin into Peripheral. All with no run-time cost!
具体芯片的HAL crate通常的做法,是为PAC暴露出来的原始结构体实现一个自定义Trait。这个Trait往往会为单个外设定义一个名为`constrain()`的函数,或者为带有多个引脚的GPIO端口之类的外设定义`split()`函数。该函数会消耗掉底层的原始外设结构体,并返回一个具有更高层API的新对象。这个API还可能做这样的事情:比如串口的`new`函数需要借用某个`Clock`结构体,而该结构体只能通过调用配置PLL并设定所有时钟频率的函数来生成。这样,不先配置好时钟频率,就不可能创建出串口对象,串口对象也不会把波特率错误地换算成时钟节拍,这些都在编译期得到保证。有些crate甚至为每个GPIO引脚可能处于的状态定义了专门的Trait,要求用户在把引脚交给外设之前先将其置于正确的状态(例如,选择合适的复用功能模式)。而这一切都没有任何运行时开销!
Let's see an example:
让我们来看一个例子:
```rust,ignore
```rust , ignore
#![no_std]
#![no_main]

src/start/registers_en.md Normal file

@ -0,0 +1,188 @@
# Memory Mapped Registers
Embedded systems can only get so far by executing normal Rust code and moving data around in RAM. If we want to get any information into or out of our system (be that blinking an LED, detecting a button press or communicating with an off-chip peripheral on some sort of bus) we're going to have to dip into the world of Peripherals and their 'memory mapped registers'.
You may well find that the code you need to access the peripherals in your micro-controller has already been written, at one of the following levels:
* Micro-architecture Crate - This sort of crate handles any useful routines common to the processor core your microcontroller is using, as well as any peripherals that are common to all micro-controllers that use that particular type of processor core. For example the [cortex-m] crate gives you functions to enable and disable interrupts, which are the same for all Cortex-M based micro-controllers. It also gives you access to the 'SysTick' peripheral included with all Cortex-M based micro-controllers.
* Peripheral Access Crate (PAC) - This sort of crate is a thin wrapper over the various memory-mapped registers defined for your particular part-number of micro-controller you are using. For example, [tm4c123x] for the Texas Instruments Tiva-C TM4C123 series, or [stm32f30x] for the ST-Micro STM32F30x series. Here, you'll be interacting with the registers directly, following each peripheral's operating instructions given in your micro-controller's Technical Reference Manual.
* HAL Crate - These crates offer a more user-friendly API for your particular processor, often by implementing some common traits defined in [embedded-hal]. For example, this crate might offer a `Serial` struct, with a constructor that takes an appropriate set of GPIO pins and a baud rate, and offers some sort of `write_byte` function for sending data. See the chapter on [Portability] for more information on [embedded-hal].
* Board Crate - These crates go one step further than a HAL Crate by pre-configuring various peripherals and GPIO pins to suit the specific developer kit or board you are using, such as [F3] for the STM32F3DISCOVERY board.
[cortex-m]: https://crates.io/crates/cortex-m
[tm4c123x]: https://crates.io/crates/tm4c123x
[stm32f30x]: https://crates.io/crates/stm32f30x
[embedded-hal]: https://crates.io/crates/embedded-hal
[Portability]: ../portability/index.md
[F3]: https://crates.io/crates/f3
## Starting at the bottom
Let's look at the SysTick peripheral that's common to all Cortex-M based micro-controllers. We can find a pretty low-level API in the [cortex-m] crate, and we can use it like this:
```rust,ignore
use cortex_m::peripheral::{syst, Peripherals};
use cortex_m_rt::entry;
#[entry]
fn main() -> ! {
let mut peripherals = Peripherals::take().unwrap();
let mut systick = peripherals.SYST;
systick.set_clock_source(syst::SystClkSource::Core);
systick.set_reload(1_000);
systick.clear_current();
systick.enable_counter();
while !systick.has_wrapped() {
// Loop
}
loop {}
}
```
The functions on the `SYST` struct map pretty closely to the functionality defined by the ARM Technical Reference Manual for this peripheral. There's nothing in this API about 'delaying for X milliseconds' - we have to crudely implement that ourselves using a `while` loop. Note that we can't access our `SYST` struct until we have called `Peripherals::take()` - this is a special routine that guarantees that there is only one `SYST` structure in our entire program. For more on that, see the [Peripherals] section.
[Peripherals]: ../peripherals/index.md
## Using a Peripheral Access Crate (PAC)
We won't get very far with our embedded software development if we restrict ourselves to only the basic peripherals included with every Cortex-M. At some point, we're going to need to write some code that's specific to the particular micro-controller we're using. In this example, let's assume we have a Texas Instruments TM4C123 - a middling 80MHz Cortex-M4 with 256 KiB of Flash. We're going to pull in the [tm4c123x] crate to make use of this chip.
```rust,ignore
#![no_std]
#![no_main]
extern crate panic_halt; // panic handler
use cortex_m_rt::entry;
use tm4c123x;
#[entry]
pub fn init() -> (Delay, Leds) {
let cp = cortex_m::Peripherals::take().unwrap();
let p = tm4c123x::Peripherals::take().unwrap();
let pwm = p.PWM0;
pwm.ctl.write(|w| w.globalsync0().clear_bit());
// Mode = 1 => Count up/down mode
pwm._2_ctl.write(|w| w.enable().set_bit().mode().set_bit());
pwm._2_gena.write(|w| w.actcmpau().zero().actcmpad().one());
// 528 cycles (264 up and down) = 4 loops per video line (2112 cycles)
pwm._2_load.write(|w| unsafe { w.load().bits(263) });
pwm._2_cmpa.write(|w| unsafe { w.compa().bits(64) });
pwm.enable.write(|w| w.pwm4en().set_bit());
}
```
We've accessed the `PWM0` peripheral in exactly the same way as we accessed the `SYST` peripheral earlier, except we called `tm4c123x::Peripherals::take()`. As this crate was auto-generated using [svd2rust], the access functions for our register fields take a closure, rather than a numeric argument. While this looks like a lot of code, the Rust compiler can use it to perform a bunch of checks for us, but then generate machine-code which is pretty close to hand-written assembler! Where the auto-generated code isn't able to determine that all possible arguments to a particular accessor function are valid (for example, if the SVD defines the register as 32-bit but doesn't say if some of those 32-bit values have a special meaning), then the function is marked as `unsafe`. We can see this in the example above when setting the `load` and `compa` sub-fields using the `bits()` function.
### Reading
The `read()` function returns an object which gives read-only access to the various sub-fields within this register, as defined by the manufacturer's SVD file for this chip. You can find all the functions available on special `R` return type for this particular register, in this particular peripheral, on this particular chip, in the [tm4c123x documentation][tm4c123x documentation R].
```rust,ignore
if pwm.ctl.read().globalsync0().is_set() {
// Do a thing
}
```
### Writing
The `write()` function takes a closure with a single argument. Typically we call this `w`. This argument then gives read-write access to the various sub-fields within this register, as defined by the manufacturer's SVD file for this chip. Again, you can find all the functions available on the 'w' for this particular register, in this particular peripheral, on this particular chip, in the [tm4c123x documentation][tm4c123x Documentation W]. Note that all of the sub-fields that we do not set will be set to a default value for us - any existing content in the register will be lost.
```rust,ignore
pwm.ctl.write(|w| w.globalsync0().clear_bit());
```
### Modifying
If we wish to change only one particular sub-field in this register and leave the other sub-fields unchanged, we can use the `modify` function. This function takes a closure with two arguments - one for reading and one for writing. Typically we call these `r` and `w` respectively. The `r` argument can be used to inspect the current contents of the register, and the `w` argument can be used to modify the register contents.
```rust,ignore
pwm.ctl.modify(|r, w| w.globalsync0().clear_bit());
```
The `modify` function really shows the power of closures here. In C, we'd have to read into some temporary value, modify the correct bits and then write the value back. This means there's considerable scope for error:
```C
uint32_t temp = pwm0.ctl.read();
temp |= PWM0_CTL_GLOBALSYNC0;
pwm0.ctl.write(temp);
uint32_t temp2 = pwm0.enable.read();
temp2 |= PWM0_ENABLE_PWM4EN;
pwm0.enable.write(temp); // Uh oh! Wrong variable!
```
[svd2rust]: https://crates.io/crates/svd2rust
[tm4c123x documentation R]: https://docs.rs/tm4c123x/0.7.0/tm4c123x/pwm0/ctl/struct.R.html
[tm4c123x documentation W]: https://docs.rs/tm4c123x/0.7.0/tm4c123x/pwm0/ctl/struct.W.html
## Using a HAL crate
The HAL crate for a chip typically works by implementing a custom Trait for the raw structures exposed by the PAC. Often this trait will define a function called `constrain()` for single peripherals or `split()` for things like GPIO ports with multiple pins. This function will consume the underlying raw peripheral structure and return a new object with a higher-level API. This API may also do things like have the Serial port `new` function require a borrow on some `Clock` structure, which can only be generated by calling the function which configures the PLLs and sets up all the clock frequencies. In this way, it is statically impossible to create a Serial port object without first having configured the clock rates, or for the Serial port object to mis-convert the baud rate into clock ticks. Some crates even define special traits for the states each GPIO pin can be in, requiring the user to put a pin into the correct state (say, by selecting the appropriate Alternate Function Mode) before passing the pin into Peripheral. All with no run-time cost!
Let's see an example:
```rust,ignore
#![no_std]
#![no_main]
extern crate panic_halt; // panic handler
use cortex_m_rt::entry;
use tm4c123x_hal as hal;
use tm4c123x_hal::prelude::*;
use tm4c123x_hal::serial::{NewlineMode, Serial};
use tm4c123x_hal::sysctl;
#[entry]
fn main() -> ! {
let p = hal::Peripherals::take().unwrap();
let cp = hal::CorePeripherals::take().unwrap();
// Wrap up the SYSCTL struct into an object with a higher-layer API
let mut sc = p.SYSCTL.constrain();
// Pick our oscillation settings
sc.clock_setup.oscillator = sysctl::Oscillator::Main(
sysctl::CrystalFrequency::_16mhz,
sysctl::SystemClock::UsePll(sysctl::PllOutputFrequency::_80_00mhz),
);
// Configure the PLL with those settings
let clocks = sc.clock_setup.freeze();
// Wrap up the GPIO_PORTA struct into an object with a higher-layer API.
// Note it needs to borrow `sc.power_control` so it can power up the GPIO
// peripheral automatically.
let mut porta = p.GPIO_PORTA.split(&sc.power_control);
// Activate the UART.
let uart = Serial::uart0(
p.UART0,
// The transmit pin
porta
.pa1
.into_af_push_pull::<hal::gpio::AF1>(&mut porta.control),
// The receive pin
porta
.pa0
.into_af_push_pull::<hal::gpio::AF1>(&mut porta.control),
// No RTS or CTS required
(),
(),
// The baud rate
115200_u32.bps(),
// Output handling
NewlineMode::SwapLFtoCRLF,
// We need the clock rates to calculate the baud rate divisors
&clocks,
// We need this to power up the UART peripheral
&sc.power_control,
);
loop {
writeln!(uart, "Hello, World!\r\n").unwrap();
}
}
```


@ -1,18 +1,12 @@
# Semihosting
# 半主机
Semihosting is a mechanism that lets embedded devices do I/O on the host and is
mainly used to log messages to the host console. Semihosting requires a debug
session and pretty much nothing else (no extra wires!) so it's super convenient
to use. The downside is that it's super slow: each write operation can take
several milliseconds depending on the hardware debugger (e.g. ST-Link) you use.
半主机是这样一种机制它允许嵌入式设备在主机上执行I/O操作主要用于将消息记录到主机控制台。半主机除了需要调试会话之外几乎不需要其他任何操作(不需要额外的接线!),因此使用起来超级方便。缺点是它非常慢:根据您使用的硬件调试器不同(例如ST-Link),每个写入操作可能要花费几毫秒。
The [`cortex-m-semihosting`] crate provides an API to do semihosting operations
on Cortex-M devices. The program below is the semihosting version of "Hello,
world!":
[`cortex-m-semihosting`] crate提供了一个API可以在Cortex-M设备上进行半主机操作。下面的程序是“Hello, world!”的半主机版本:
[`cortex-m-semihosting`]: https://crates.io/crates/cortex-m-semihosting
[`cortex-m-semihosting`]:https://crates.io/crates/cortex-m-semihosting
```rust,ignore
```rust , ignore
#![no_main]
#![no_std]
@ -29,8 +23,7 @@ fn main() -> ! {
}
```
If you run this program on hardware you'll see the "Hello, world!" message
within the OpenOCD logs.
如果您在硬件上运行此程序则会在OpenOCD日志中看到"Hello, world!" 消息。
``` console
$ openocd
@ -39,17 +32,14 @@ Hello, world!
(..)
```
You do need to enable semihosting in OpenOCD from GDB first:
您需要先通过GDB在OpenOCD中启用半主机功能
``` console
(gdb) monitor arm semihosting enable
semihosting is enabled
```
QEMU understands semihosting operations so the above program will also work with
`qemu-system-arm` without having to start a debug session. Note that you'll
need to pass the `-semihosting-config` flag to QEMU to enable semihosting
support; these flags are already included in the `.cargo/config` file of the
template.
QEMU能够理解半主机操作因此上述程序也可以与`qemu-system-arm`一起使用,而无需启动调试会话。注意,您需要将`-semihosting-config`参数传递给QEMU以启用半主机支持这些参数已经包含在模板的`.cargo/config`文件中。
``` console
$ # this program will block the terminal
@ -58,12 +48,10 @@ $ cargo run
Hello, world!
```
There's also an `exit` semihosting operation that can be used to terminate the
QEMU process. Important: do **not** use `debug::exit` on hardware; this function
can corrupt your OpenOCD session and you will not be able to debug more programs
until you restart it.
还有一个`exit`半主机操作可用于终止QEMU进程。重要提示不要在硬件上使用`debug::exit`此功能可能会破坏您的OpenOCD会话并且只有重新启动它才能调试更多程序。
```rust,ignore
```rust , ignore
#![no_main]
#![no_std]
@ -94,14 +82,11 @@ $ echo $?
1
```
One last tip: you can set the panicking behavior to `exit(EXIT_FAILURE)`. This
will let you write `no_std` run-pass tests that you can run on QEMU.
最后一个提示:您可以将恐慌行为设置为`exit(EXIT_FAILURE)`。这将使您编写可以在QEMU上运行的`no_std`测试案例。
For convenience, the `panic-semihosting` crate has an "exit" feature that when
enabled invokes `exit(EXIT_FAILURE)` after logging the panic message to the host
stderr.
为方便起见,`panic-semihosting` crate提供了一个“exit”特性feature启用后它会在把panic消息记录到主机stderr之后调用`exit(EXIT_FAILURE)`。
```rust,ignore
```rust , ignore
#![no_main]
#![no_std]
@ -131,15 +116,12 @@ $ echo $?
1
```
**NOTE**: To enable this feature on `panic-semihosting`, edit your
`Cargo.toml` dependencies section where `panic-semihosting` is specified with:
**注意**:要在`panic-semihosting`上启用此功能,请在您的`Cargo.toml`依赖项部分中编辑`panic-semihosting`
``` toml
panic-semihosting = { version = "VERSION", features = ["exit"] }
```
where `VERSION` is the version desired. For more information on dependencies
features check the [`specifying dependencies`] section of the Cargo book.
其中`VERSION` 是所需的版本。有关依赖项功能的更多信息请参阅《Cargo手册》中的[`specifying dependencies`]部分。
[`specifying dependencies`]:
https://doc.rust-lang.org/cargo/reference/specifying-dependencies.html
[`specifying dependencies`]:https://doc.rust-lang.org/cargo/reference/specifying-dependencies.html

127
src/start/semihosting_en.md Normal file
View File

@ -0,0 +1,127 @@
# Semihosting
Semihosting is a mechanism that lets embedded devices do I/O on the host and is mainly used to log messages to the host console. Semihosting requires a debug session and pretty much nothing else (no extra wires!) so it's super convenient to use. The downside is that it's super slow: each write operation can take several milliseconds depending on the hardware debugger (e.g. ST-Link) you use.
The [`cortex-m-semihosting`] crate provides an API to do semihosting operations on Cortex-M devices. The program below is the semihosting version of "Hello, world!":
[`cortex-m-semihosting`]: https://crates.io/crates/cortex-m-semihosting
```rust , ignore
#![no_main]
#![no_std]
extern crate panic_halt;
use cortex_m_rt::entry;
use cortex_m_semihosting::hprintln;
#[entry]
fn main() -> ! {
hprintln!("Hello, world!").unwrap();
loop {}
}
```
If you run this program on hardware you'll see the "Hello, world!" message within the OpenOCD logs.
``` console
$ openocd
(..)
Hello, world!
(..)
```
You do need to enable semihosting in OpenOCD from GDB first:
``` console
(gdb) monitor arm semihosting enable
semihosting is enabled
```
QEMU understands semihosting operations so the above program will also work with
`qemu-system-arm` without having to start a debug session. Note that you'll need to pass the `-semihosting-config` flag to QEMU to enable semihosting support; these flags are already included in the `.cargo/config` file of the template.
``` console
$ # this program will block the terminal
$ cargo run
Running `qemu-system-arm (..)
Hello, world!
```
There's also an `exit` semihosting operation that can be used to terminate the QEMU process. Important: do **not** use `debug::exit` on hardware; this function can corrupt your OpenOCD session and you will not be able to debug more programs until you restart it.
```rust , ignore
#![no_main]
#![no_std]
extern crate panic_halt;
use cortex_m_rt::entry;
use cortex_m_semihosting::debug;
#[entry]
fn main() -> ! {
let roses = "blue";
if roses == "red" {
debug::exit(debug::EXIT_SUCCESS);
} else {
debug::exit(debug::EXIT_FAILURE);
}
loop {}
}
```
``` console
$ cargo run
Running `qemu-system-arm (..)
$ echo $?
1
```
One last tip: you can set the panicking behavior to `exit(EXIT_FAILURE)`. This will let you write `no_std` run-pass tests that you can run on QEMU.
For convenience, the `panic-semihosting` crate has an "exit" feature that when enabled invokes `exit(EXIT_FAILURE)` after logging the panic message to the host stderr.
```rust , ignore
#![no_main]
#![no_std]
extern crate panic_semihosting; // features = ["exit"]
use cortex_m_rt::entry;
use cortex_m_semihosting::debug;
#[entry]
fn main() -> ! {
let roses = "blue";
assert_eq!(roses, "red");
loop {}
}
```
``` console
$ cargo run
Running `qemu-system-arm (..)
panicked at 'assertion failed: `(left == right)`
left: `"blue"`,
right: `"red"`', examples/hello.rs:15:5
$ echo $?
1
```
**NOTE**: To enable this feature on `panic-semihosting`, edit your `Cargo.toml` dependencies section where `panic-semihosting` is specified with:
``` toml
panic-semihosting = { version = "VERSION", features = ["exit"] }
```
where `VERSION` is the version desired. For more information on dependencies features check the [`specifying dependencies`] section of the Cargo book.
[`specifying dependencies`]:
https://doc.rust-lang.org/cargo/reference/specifying-dependencies.html

View File

@ -1,24 +1,27 @@
# Design Contracts
# 设计合约
In our last chapter, we wrote an interface that *didn't* enforce design contracts. Let's take another look at our imaginary GPIO configuration register:
在上一章中,我们编写了一个接口,但是该接口**没有**严格执行设计合约。让我们再看一下我们假想的GPIO配置寄存器
| Name | Bit Number(s) | Value | Meaning | Notes |
| ---: | ------------: | ----: | ------: | ----: |
| enable | 0 | 0 | disabled | Disables the GPIO |
| | | 1 | enabled | Enables the GPIO |
| direction | 1 | 0 | input | Sets the direction to Input |
| | | 1 | output | Sets the direction to Output |
| input_mode | 2..3 | 00 | hi-z | Sets the input as high resistance |
| | | 01 | pull-low | Input pin is pulled low |
| | | 10 | pull-high | Input pin is pulled high |
| | | 11 | n/a | Invalid state. Do not set |
| output_mode | 4 | 0 | set-low | Output pin is driven low |
| | | 1 | set-high | Output pin is driven high |
| input_status | 5 | x | in-val | 0 if input is < 1.5v, 1 if input >= 1.5v |
| 名字 | 位号 | 值 | 含义 | 注意事项 |
| ---: | ------------: | ----: | ------: | ----: |
| 启用 | 0 | 0 | 禁用 | 禁用GPIO |
| | | 1 | 启用 | 启用GPIO |
| 方向 | 1 | 0 | 输入 | 将方向设置为输入 |
| | | 1 | 输出 | 将方向设置为输出 |
| 输入模式 | 2..3 | 00 | 高阻 | 将输入设置为高阻 |
| | | 01 | 拉低 | 输入引脚被拉低 |
| | | 10 | 拉高 | 输入引脚被拉高 |
| | | 11 | n/a | 状态无效,不要设置 |
| 输出模式 | 4 | 0 | 低 | 输出引脚被驱动为低电平 |
| | | 1 | 高 | 输出引脚被驱动为高电平 |
| 输入状态 | 5 | x | 输入值 | 如果输入<1.5v则为0如果输入>=1.5v则为1 |
If we instead checked the state before making use of the underlying hardware, enforcing our design contracts at runtime, we might write code that looks like this instead:
```rust,ignore
如果我们改为在运行时检查是否遵循了设计合约,即在使用底层硬件之前检查状态,则我们可能会编写如下所示的代码:
```rust , ignore
/// GPIO interface
struct GpioConfig {
/// GPIO Configuration structure generated by svd2rust
@ -97,13 +100,13 @@ impl GpioConfig {
}
```
Because we need to enforce the restrictions on the hardware, we end up doing a lot of runtime checking which wastes time and resources, and this code will be much less pleasant for the developer to use.
因为我们需要强制执行硬件的使用限制,所以最终要做大量的运行时检查,这既浪费时间又浪费资源,而且这样的接口对开发人员来说也不够友好(写起来乏味,用起来也容易出错)。
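下面用一个假想的调用片段(其中`get_gpio_config()`和`InputMode::PullLow`这两个名字只是示意)来说明这种接口用起来有多繁琐:即使调用顺序完全正确,每一步仍然要在运行时处理`Result`

```rust,ignore
// 假设 get_gpio_config() 返回上面定义的、带运行时检查的 GpioConfig仅作示意
let mut pin = get_gpio_config();

pin.set_enable(true);                                          // 不会失败
pin.set_direction(false).expect("必须先启用");                  // 运行时重新读取 enable 位
pin.set_input_mode(InputMode::PullLow).expect("必须是输入");     // 又一次读寄存器检查
let level = pin.get_input_status().expect("必须是已启用的输入"); // 每次读取都可能返回 Err
```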
## Type States
## 类型状态机
But what if instead, we used Rust's type system to enforce the state transition rules? Take this example:
如果我们使用Rust的类型系统执行状态转换规则会是什么样子呢举个例子
```rust,ignore
```rust , ignore
/// GPIO interface
struct GpioConfig<ENABLED, DIRECTION, MODE> {
/// GPIO Configuration structure generated by svd2rust
@ -209,9 +212,9 @@ impl<IN_MODE> GpioConfig<Enabled, Input, IN_MODE> {
}
```
Now let's see what the code using this would look like:
现在,让我们看一下使用它的代码是什么样的:
```rust,ignore
```rust , ignore
/*
* Example 1: Unconfigured to High-Z input
*/
@ -245,10 +248,10 @@ output_pin.set_bit(true);
// output_pin.into_input_pull_down();
```
This is definitely a convenient way to store the state of the pin, but why do it this way? Why is this better than storing the state as an `enum` inside of our `GpioConfig` structure?
这绝对是存储引脚状态的便捷方法,但是为什么要这样做呢?为什么这比将状态作为`enum`存储在我们的 `GpioConfig`结构中更好?
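作为对照,下面是一个假想的草图,展示“把状态存成`enum`字段”的做法(`PinState`及其变体名只是示意)状态字段要占用RAM每个方法都得在运行时`match`检查,误用也只能等到运行时才暴露:

```rust,ignore
// 假想的替代方案:用枚举在运行时跟踪引脚状态
enum PinState {
    Disabled,
    InputHighZ,
    InputPulledLow,
    InputPulledHigh,
    Output,
}

struct GpioConfig {
    periph: GPIO_CONFIG,
    state: PinState, // 占用 RAM且只能在运行时检查
}

impl GpioConfig {
    pub fn set_bit(&mut self, set_high: bool) -> Result<(), ()> {
        match self.state {
            PinState::Output => {
                self.periph.modify(|_r, w| w.output_mode.set_bit(set_high));
                Ok(())
            }
            _ => Err(()), // 非法调用要到运行时才能发现
        }
    }
}
```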
## Compile Time Functional Safety
## 编译时功能安全
Because we are enforcing our design constraints entirely at compile time, this incurs no runtime cost. It is impossible to set an output mode when you have a pin in an input mode. Instead, you must walk through the states by converting it to an output pin, and then setting the output mode. Because of this, there is no runtime penalty due to checking the current state before executing a function.
因为我们完全在编译时强制执行设计约束,所以不会产生任何运行时成本。当引脚处于输入模式时,根本无法设置输出模式;您必须按状态一步步走:先把它转换为输出引脚,然后再设置输出模式。正因为如此,不需要在执行函数之前检查当前状态,也就没有运行时开销。
Also, because these states are enforced by the type system, there is no longer room for errors by consumers of this interface. If they try to perform an illegal state transition, the code will not compile!
同样,由于这些状态是由类型系统强制执行的,该接口的使用者不再有出错的余地。如果他们尝试执行非法的状态转换,代码将无法通过编译!

View File

@ -0,0 +1,254 @@
# Design Contracts
In our last chapter, we wrote an interface that *didn't* enforce design contracts. Let's take another look at our imaginary GPIO configuration register:
| Name | Bit Number(s) | Value | Meaning | Notes |
| ---: | ------------: | ----: | ------: | ----: |
| enable | 0 | 0 | disabled | Disables the GPIO |
| | | 1 | enabled | Enables the GPIO |
| direction | 1 | 0 | input | Sets the direction to Input |
| | | 1 | output | Sets the direction to Output |
| input_mode | 2..3 | 00 | hi-z | Sets the input as high resistance |
| | | 01 | pull-low | Input pin is pulled low |
| | | 10 | pull-high | Input pin is pulled high |
| | | 11 | n/a | Invalid state. Do not set |
| output_mode | 4 | 0 | set-low | Output pin is driven low |
| | | 1 | set-high | Output pin is driven high |
| input_status | 5 | x | in-val | 0 if input is < 1.5v, 1 if input >= 1.5v |
If we instead checked the state before making use of the underlying hardware, enforcing our design contracts at runtime, we might write code that looks like this instead:
```rust , ignore
/// GPIO interface
struct GpioConfig {
/// GPIO Configuration structure generated by svd2rust
periph: GPIO_CONFIG,
}
impl GpioConfig {
pub fn set_enable(&mut self, is_enabled: bool) {
self.periph.modify(|_r, w| {
w.enable().set_bit(is_enabled)
});
}
pub fn set_direction(&mut self, is_output: bool) -> Result<(), ()> {
if self.periph.read().enable().bit_is_clear() {
// Must be enabled to set direction
return Err(());
}
self.periph.modify(|r, w| {
w.direction().set_bit(is_output)
});
Ok(())
}
pub fn set_input_mode(&mut self, variant: InputMode) -> Result<(), ()> {
if self.periph.read().enable().bit_is_clear() {
// Must be enabled to set input mode
return Err(());
}
if self.periph.read().direction().bit_is_set() {
// Direction must be input
return Err(());
}
self.periph.modify(|_r, w| {
w.input_mode().variant(variant)
});
Ok(())
}
pub fn set_output_status(&mut self, is_high: bool) -> Result<(), ()> {
if self.periph.read().enable().bit_is_clear() {
// Must be enabled to set output status
return Err(());
}
if self.periph.read().direction().bit_is_clear() {
// Direction must be output
return Err(());
}
self.periph.modify(|_r, w| {
w.output_mode.set_bit(is_high)
});
Ok(())
}
pub fn get_input_status(&self) -> Result<bool, ()> {
if self.periph.read().enable().bit_is_clear() {
// Must be enabled to get status
return Err(());
}
if self.periph.read().direction().bit_is_set() {
// Direction must be input
return Err(());
}
Ok(self.periph.read().input_status().bit_is_set())
}
}
```
Because we need to enforce the restrictions on the hardware, we end up doing a lot of runtime checking which wastes time and resources, and this code will be much less pleasant for the developer to use.
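As a hedged sketch (the `get_gpio_config()` helper and the `InputMode::PullLow` variant name are assumptions), this is what calling the runtime-checked interface looks like: every step returns a `Result` that the caller must handle, even when the call order is already correct:

```rust,ignore
// Hypothetical helper returning the runtime-checked GpioConfig defined above
let mut pin = get_gpio_config();

pin.set_enable(true);                                              // cannot fail
pin.set_direction(false).expect("must be enabled first");          // re-reads the enable bit
pin.set_input_mode(InputMode::PullLow).expect("must be an input"); // another register read
let level = pin.get_input_status().expect("must be an enabled input");
```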
## Type States
But what if instead, we used Rust's type system to enforce the state transition rules? Take this example:
```rust , ignore
/// GPIO interface
struct GpioConfig<ENABLED, DIRECTION, MODE> {
/// GPIO Configuration structure generated by svd2rust
periph: GPIO_CONFIG,
enabled: ENABLED,
direction: DIRECTION,
mode: MODE,
}
// Type states for MODE in GpioConfig
struct Disabled;
struct Enabled;
struct Output;
struct Input;
struct PulledLow;
struct PulledHigh;
struct HighZ;
struct DontCare;
/// These functions may be used on any GPIO Pin
impl<EN, DIR, IN_MODE> GpioConfig<EN, DIR, IN_MODE> {
pub fn into_disabled(self) -> GpioConfig<Disabled, DontCare, DontCare> {
self.periph.modify(|_r, w| w.enable.disabled());
GpioConfig {
periph: self.periph,
enabled: Disabled,
direction: DontCare,
mode: DontCare,
}
}
pub fn into_enabled_input(self) -> GpioConfig<Enabled, Input, HighZ> {
self.periph.modify(|_r, w| {
w.enable.enabled()
.direction.input()
.input_mode.high_z()
});
GpioConfig {
periph: self.periph,
enabled: Enabled,
direction: Input,
mode: HighZ,
}
}
pub fn into_enabled_output(self) -> GpioConfig<Enabled, Output, DontCare> {
self.periph.modify(|_r, w| {
w.enable.enabled()
.direction.output()
.input_mode.set_high()
});
GpioConfig {
periph: self.periph,
enabled: Enabled,
direction: Output,
mode: DontCare,
}
}
}
/// This function may be used on an Output Pin
impl GpioConfig<Enabled, Output, DontCare> {
pub fn set_bit(&mut self, set_high: bool) {
self.periph.modify(|_r, w| w.output_mode.set_bit(set_high));
}
}
/// These methods may be used on any enabled input GPIO
impl<IN_MODE> GpioConfig<Enabled, Input, IN_MODE> {
pub fn bit_is_set(&self) -> bool {
self.periph.read().input_status.bit_is_set()
}
pub fn into_input_high_z(self) -> GpioConfig<Enabled, Input, HighZ> {
self.periph.modify(|_r, w| w.input_mode().high_z());
GpioConfig {
periph: self.periph,
enabled: Enabled,
direction: Input,
mode: HighZ,
}
}
pub fn into_input_pull_down(self) -> GpioConfig<Enabled, Input, PulledLow> {
self.periph.modify(|_r, w| w.input_mode().pull_low());
GpioConfig {
periph: self.periph,
enabled: Enabled,
direction: Input,
mode: PulledLow,
}
}
pub fn into_input_pull_up(self) -> GpioConfig<Enabled, Input, PulledHigh> {
self.periph.modify(|_r, w| w.input_mode().pull_high());
GpioConfig {
periph: self.periph,
enabled: Enabled,
direction: Input,
mode: PulledHigh,
}
}
}
```
Now let's see what the code using this would look like:
```rust , ignore
/*
* Example 1: Unconfigured to High-Z input
*/
let pin: GpioConfig<Disabled, _, _> = get_gpio();
// Can't do this, pin isn't enabled!
// pin.into_input_pull_down();
// Now turn the pin from unconfigured to a high-z input
let input_pin = pin.into_enabled_input();
// Read from the pin
let pin_state = input_pin.bit_is_set();
// Can't do this, input pins don't have this interface!
// input_pin.set_bit(true);
/*
* Example 2: High-Z input to Pulled Low input
*/
let pulled_low = input_pin.into_input_pull_down();
let pin_state = pulled_low.bit_is_set();
/*
* Example 3: Pulled Low input to Output, set high
*/
let output_pin = pulled_low.into_enabled_output();
output_pin.set_bit(true);
// Can't do this, output pins don't have this interface!
// output_pin.into_input_pull_down();
```
This is definitely a convenient way to store the state of the pin, but why do it this way? Why is this better than storing the state as an `enum` inside of our `GpioConfig` structure?
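For comparison, here is a hedged sketch of the `enum`-based alternative (`PinState` and its variant names are made up): the state field occupies RAM, every method has to `match` on it at runtime, and misuse only shows up when the program is already running:

```rust,ignore
// Hypothetical alternative: track the pin state in an enum at runtime
enum PinState {
    Disabled,
    InputHighZ,
    InputPulledLow,
    InputPulledHigh,
    Output,
}

struct GpioConfig {
    periph: GPIO_CONFIG,
    state: PinState, // occupies RAM and can only be checked at runtime
}

impl GpioConfig {
    pub fn set_bit(&mut self, set_high: bool) -> Result<(), ()> {
        match self.state {
            PinState::Output => {
                self.periph.modify(|_r, w| w.output_mode.set_bit(set_high));
                Ok(())
            }
            _ => Err(()), // illegal calls are only caught at runtime
        }
    }
}
```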
## Compile Time Functional Safety
Because we are enforcing our design constraints entirely at compile time, this incurs no runtime cost. It is impossible to set an output mode when you have a pin in an input mode. Instead, you must walk through the states by converting it to an output pin, and then setting the output mode. Because of this, there is no runtime penalty due to checking the current state before executing a function.
Also, because these states are enforced by the type system, there is no longer room for errors by consumers of this interface. If they try to perform an illegal state transition, the code will not compile!

View File

@ -1,23 +1,12 @@
# Static Guarantees
# 静态保证
Rust's type system prevents data races at compile time (see [`Send`] and
[`Sync`] traits). The type system can also be used to check other properties at
compile time; reducing the need for runtime checks in some cases.
Rust的类型系统在编译时就能防止数据竞争请参阅[`Send`]和[`Sync`]特性)。类型系统还可以用于在编译时检查其他属性,从而在某些情况下减少对运行时检查的需求。
[`Send`]: https://doc.rust-lang.org/core/marker/trait.Send.html
[`Sync`]: https://doc.rust-lang.org/core/marker/trait.Sync.html
[`Send`]:https://doc.rust-lang.org/core/marker/trait.Send.html
[`Sync`]:https://doc.rust-lang.org/core/marker/trait.Sync.html
When applied to embedded programs these *static checks* can be used, for
example, to enforce that configuration of I/O interfaces is done properly. For
instance, one can design an API where it is only possible to initialize a serial
interface by first configuring the pins that will be used by the interface.
将这些**静态检查**应用于嵌入式程序例如可以用来强制确保I/O接口被正确配置。比如可以设计这样一种API只有先配置好串口将要使用的引脚才能初始化串口接口。
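一个假想的API草图可以说明这一点下面的类型名、方法名和引脚编号都只是示意并非某个具体HAL的真实API串口的构造函数只接受已经处于正确复用功能状态的引脚类型本身就是证明

```rust,ignore
// 构造函数的签名要求两个引脚已经配置为串口的复用功能
pub fn new(
    uart: UART0,
    tx: Pin<PA9, AlternateFunction>,  // 类型证明引脚已配置好
    rx: Pin<PA10, AlternateFunction>,
    baud: Bps,
) -> Serial { /* ... */ }

// 使用方必须先转换引脚,否则类型不匹配,无法通过编译
let tx = pins.pa9.into_alternate_function();
let rx = pins.pa10.into_alternate_function();
let serial = Serial::new(p.UART0, tx, rx, 115_200.bps());
```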
One can also statically check that operations, like setting a pin low, can only
be performed on correctly configured peripherals. For example, trying to change
the output state of a pin configured in floating input mode would raise a
compile error.
还可以静态地保证某些操作(例如把引脚拉低)只能在已正确配置的外设上执行。例如,试图改变一个被配置为浮空输入模式的引脚的输出状态,会直接产生编译错误。
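对应地,下面是一个最小草图(`Pin`、`Output`、`FloatingInput`等名字只是示意):只为处于输出状态的引脚实现`set_low`,浮空输入引脚根本没有这个方法,误用会直接变成编译错误:

```rust,ignore
impl Pin<Output> {
    pub fn set_low(&mut self) { /* 写寄存器 */ }
}

let pin: Pin<FloatingInput> = get_pin(); // 假想的辅助函数
// pin.set_low(); // 编译错误Pin<FloatingInput> 上不存在 `set_low` 方法
```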
And, as seen in the previous chapter, the concept of ownership can be applied
to peripherals to ensure that only certain parts of a program can modify a
peripheral. This *access control* makes software easier to reason about
compared to the alternative of treating peripherals as global mutable state.
而且,如上一章所述,所有权的概念可以应用于外设,以确保只有程序的特定部分才能修改某个外设。与把外设当作全局可变状态的做法相比,这种**访问控制**让软件更容易推理。

View File

@ -0,0 +1,12 @@
# Static Guarantees
Rust's type system prevents data races at compile time (see [`Send`] and[`Sync`] traits). The type system can also be used to check other properties at compile time; reducing the need for runtime checks in some cases.
[`Send`]: https://doc.rust-lang.org/core/marker/trait.Send.html
[`Sync`]: https://doc.rust-lang.org/core/marker/trait.Sync.html
When applied to embedded programs these *static checks* can be used, for example, to enforce that configuration of I/O interfaces is done properly. For instance, one can design an API where it is only possible to initialize a serial interface by first configuring the pins that will be used by the interface.
One can also statically check that operations, like setting a pin low, can only be performed on correctly configured peripherals. For example, trying to change the output state of a pin configured in floating input mode would raise a compile error.
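As a hedged illustration (the `Serial`, `Pin`, `Output` and `FloatingInput` names below are made up and do not come from any particular HAL), both guarantees come down to encoding the configuration in the pin's type:

```rust,ignore
// The serial constructor only accepts pins that are already in the right mode
pub fn new(uart: UART0, tx: Pin<TxFunction>, rx: Pin<RxFunction>) -> Serial { /* ... */ }

// Driving a level is only implemented for output pins
impl Pin<Output> {
    pub fn set_low(&mut self) { /* register write */ }
}

let pin: Pin<FloatingInput> = get_pin(); // hypothetical helper
// pin.set_low(); // compile error: no `set_low` method on a floating input pin
```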
And, as seen in the previous chapter, the concept of ownership can be applied to peripherals to ensure that only certain parts of a program can modify a peripheral. This *access control* makes software easier to reason about compared to the alternative of treating peripherals as global mutable state.

View File

@ -1,59 +1,59 @@
# Peripherals as State Machines
# 外设作为状态机
The peripherals of a microcontroller can be thought of as set of state machines. For example, the configuration of a simplified [GPIO pin] could be represented as the following tree of states:
微控制器的外围设备可以认为是一组状态机。例如,简化的[GPIO引脚]的配置可以表示为以下状态树:
[GPIO pin]: https://en.wikipedia.org/wiki/General-purpose_input/output
[GPIO引脚]:https://zh.wikipedia.org/wiki/通用输入/输出
* Disabled
* Enabled
    * Configured as Output
        * Output: High
        * Output: Low
    * Configured as Input
        * Input: High Resistance
        * Input: Pulled Low
        * Input: Pulled High
* 禁用
* 已启用
    * 配置为输出
        * 输出:高
        * 输出:低
    * 配置为输入
        * 输入:高电阻
        * 输入:拉低
        * 输入:拉高
If the peripheral starts in the `Disabled` mode, to move to the `Input: High Resistance` mode, we must perform the following steps:
如果外围设备以“禁用”模式启动,想要转移至“输入:高阻”模式,我们必须执行以下步骤:
1. Disabled
2. Enabled
3. Configured as Input
4. Input: High Resistance
1. 禁用模式
2. 启用
3. 配置为输入
4. 输入:高电阻
If we wanted to move from `Input: High Resistance` to `Input: Pulled Low`, we must perform the following steps:
如果要从“输入:高阻”转移至“输入:拉低”,则必须执行以下步骤:
1. Input: High Resistance
2. Input: Pulled Low
1. 输入:高阻
2. 输入:拉低
Similarly, if we want to move a GPIO pin from configured as `Input: Pulled Low` to `Output: High`, we must perform the following steps:
同样如果要将GPIO引脚从“输入:拉低”转移到“输出:高”,则必须执行以下步骤:
1. Input: Pulled Low
2. Configured as Input
3. Configured as Output
4. Output: High
1. 输入:拉低
2. 配置为输入
3. 配置为输出
4. 输出:高
## Hardware Representation
## 硬件表示
Typically the states listed above are set by writing values to given registers mapped to a GPIO peripheral. Let's define an imaginary GPIO Configuration Register to illustrate this:
通常上面列出的状态是通过向映射到GPIO外设的特定寄存器写入值来设置的。让我们假想一个GPIO配置寄存器来说明这一点
| Name | Bit Number(s) | Value | Meaning | Notes |
| ---: | ------------: | ----: | ------: | ----: |
| enable | 0 | 0 | disabled | Disables the GPIO |
| | | 1 | enabled | Enables the GPIO |
| direction | 1 | 0 | input | Sets the direction to Input |
| | | 1 | output | Sets the direction to Output |
| input_mode | 2..3 | 00 | hi-z | Sets the input as high resistance |
| | | 01 | pull-low | Input pin is pulled low |
| | | 10 | pull-high | Input pin is pulled high |
| | | 11 | n/a | Invalid state. Do not set |
| output_mode | 4 | 0 | set-low | Output pin is driven low |
| | | 1 | set-high | Output pin is driven high |
| input_status | 5 | x | in-val | 0 if input is < 1.5v, 1 if input >= 1.5v |
| 名字 | 位号 | 值 | 含义 | 注意事项 |
| ---: | ------------: | ----: | ------: | ----: |
| 启用 | 0 | 0 | 禁用 | 禁用GPIO |
| | | 1 | 启用 | 启用GPIO |
| 方向 | 1 | 0 | 输入 | 将方向设置为输入 |
| | | 1 | 输出 | 将方向设置为输出 |
| 输入模式 | 2..3 | 00 | 高阻 | 将输入设置为高阻 |
| | | 01 | 拉低 | 输入引脚被拉低 |
| | | 10 | 拉高 | 输入引脚被拉高 |
| | | 11 | n/a | 状态无效,不要设置 |
| 输出模式 | 4 | 0 | 低 | 输出引脚被驱动为低电平 |
| | | 1 | 高 | 输出引脚被驱动为高电平 |
| 输入状态 | 5 | x | 输入值 | 如果输入<1.5v则为0如果输入>=1.5v则为1 |
We _could_ expose the following structure in Rust to control this GPIO:
我们可以在Rust中定义以下结构来控制此GPIO:
```rust,ignore
```rust , ignore
/// GPIO interface
struct GpioConfig {
/// GPIO Configuration structure generated by svd2rust
@ -91,8 +91,8 @@ impl GpioConfig {
}
```
However, this would allow us to modify certain registers that do not make sense. For example, what happens if we set the `output_mode` field when our GPIO is configured as an input?
但是这也允许我们对寄存器做一些没有意义的修改。例如如果在GPIO被配置为输入时设置`output_mode`字段,会发生什么?
In general, use of this structure would allow us to reach states not defined by our state machine above: e.g. an output that is pulled low, or an input that is set high. For some hardware, this may not matter. On other hardware, it could cause unexpected or undefined behavior!
一般来说,使用这个结构体会让我们进入上面的状态机没有定义的状态:例如,一个被拉低(pull-low)的输出引脚,或者一个被置高(set-high)的输入引脚。对于某些硬件,这可能无所谓;但在其他硬件上,这可能导致意外或未定义的行为!
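举一个假想的误用片段(`get_gpio_config()`和`InputMode::PullLow`这两个名字只是示意):下面的调用顺序可以顺利编译,却把引脚带进了上面状态机中不存在的状态:

```rust,ignore
let mut pin = get_gpio_config();          // 假想的辅助函数,返回上面的 GpioConfig
pin.set_enable(true);
pin.set_direction(false);                 // 配置为输入
pin.set_input_mode(InputMode::PullLow);   // 输入:拉低
pin.set_output_mode(true);                // 却又给输入引脚“置高”,编译器不会阻止
```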
Although this interface is convenient to write, it doesn't enforce the design contracts set out by our hardware implementation.
尽管这个接口写起来很方便但它并没有强制执行我们的硬件实现所规定的设计合约design contracts

View File

@ -0,0 +1,98 @@
# Peripherals as State Machines
The peripherals of a microcontroller can be thought of as set of state machines. For example, the configuration of a simplified [GPIO pin] could be represented as the following tree of states:
[GPIO pin]: https://en.wikipedia.org/wiki/General-purpose_input/output
* Disabled
* Enabled
    * Configured as Output
        * Output: High
        * Output: Low
    * Configured as Input
        * Input: High Resistance
        * Input: Pulled Low
        * Input: Pulled High
If the peripheral starts in the `Disabled` mode, to move to the `Input: High Resistance` mode, we must perform the following steps:
1. Disabled
2. Enabled
3. Configured as Input
4. Input: High Resistance
If we wanted to move from `Input: High Resistance` to `Input: Pulled Low`, we must perform the following steps:
1. Input: High Resistance
2. Input: Pulled Low
Similarly, if we want to move a GPIO pin from configured as `Input: Pulled Low` to `Output: High`, we must perform the following steps:
1. Input: Pulled Low
2. Configured as Input
3. Configured as Output
4. Output: High
## Hardware Representation
Typically the states listed above are set by writing values to given registers mapped to a GPIO peripheral. Let's define an imaginary GPIO Configuration Register to illustrate this:
| Name | Bit Number(s) | Value | Meaning | Notes |
| ---: | ------------: | ----: | ------: | ----: |
| enable | 0 | 0 | disabled | Disables the GPIO |
| | | 1 | enabled | Enables the GPIO |
| direction | 1 | 0 | input | Sets the direction to Input |
| | | 1 | output | Sets the direction to Output |
| input_mode | 2..3 | 00 | hi-z | Sets the input as high resistance |
| | | 01 | pull-low | Input pin is pulled low |
| | | 10 | pull-high | Input pin is pulled high |
| | | 11 | n/a | Invalid state. Do not set |
| output_mode | 4 | 0 | set-low | Output pin is driven low |
| | | 1 | set-high | Output pin is driven high |
| input_status | 5 | x | in-val | 0 if input is < 1.5v, 1 if input >= 1.5v |
We _could_ expose the following structure in Rust to control this GPIO:
```rust , ignore
/// GPIO interface
struct GpioConfig {
/// GPIO Configuration structure generated by svd2rust
periph: GPIO_CONFIG,
}
impl GpioConfig {
pub fn set_enable(&mut self, is_enabled: bool) {
self.periph.modify(|_r, w| {
w.enable().set_bit(is_enabled)
});
}
pub fn set_direction(&mut self, is_output: bool) {
self.periph.modify(|r, w| {
w.direction().set_bit(is_output)
});
}
pub fn set_input_mode(&mut self, variant: InputMode) {
self.periph.modify(|_r, w| {
w.input_mode().variant(variant)
});
}
pub fn set_output_mode(&mut self, is_high: bool) {
self.periph.modify(|_r, w| {
w.output_mode.set_bit(is_high)
});
}
pub fn get_input_status(&self) -> bool {
self.periph.read().input_status().bit_is_set()
}
}
```
However, this would allow us to modify certain registers that do not make sense. For example, what happens if we set the `output_mode` field when our GPIO is configured as an input?
In general, use of this structure would allow us to reach states not defined by our state machine above: e.g. an output that is pulled low, or an input that is set high. For some hardware, this may not matter. On other hardware, it could cause unexpected or undefined behavior!
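As a hedged sketch (the `get_gpio_config()` helper and the `InputMode::PullLow` variant name are assumptions), the following sequence compiles just fine yet drives the pin into a state our state machine never defined:

```rust,ignore
let mut pin = get_gpio_config();          // hypothetical helper returning the GpioConfig above
pin.set_enable(true);
pin.set_direction(false);                 // configure as input
pin.set_input_mode(InputMode::PullLow);   // Input: Pulled Low
pin.set_output_mode(true);                // ...and also "set high": nothing stops us
```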
Although this interface is convenient to write, it doesn't enforce the design contracts set out by our hardware implementation.

View File

@ -1,9 +1,9 @@
# Typestate Programming
# 类型状态机Typestate编程
The concept of [typestates] describes the encoding of information about the current state of an object into the type of that object. Although this can sound a little arcane, if you have used the [Builder Pattern] in Rust, you have already started using Typestate Programming!
[typestates]的概念是指把对象当前状态的信息编码到该对象的类型中。虽然这听起来有些玄妙但如果您在Rust中用过[建造者模式],那么您其实已经在使用类型状态编程了!
[typestates]: https://en.wikipedia.org/wiki/Typestate_analysis
[Builder Pattern]: https://doc.rust-lang.org/1.0.0/style/ownership/builders.html
[typestates]:https://en.wikipedia.org/wiki/Typestate_analysis
[建造者模式]:https://doc.rust-lang.org/1.0.0/style/ownership/builders.html
```rust
pub mod foo_module {
@ -49,17 +49,17 @@ fn main() {
}
```
In this example, there is no direct way to create a `Foo` object. We must create a `FooBuilder`, and properly initialize it before we can obtain the `Foo` object we want.
在这个例子中,没有直接的方法来创建一个`Foo`对象。我们必须创建一个`FooBuilder`并正确地对其进行初始化,然后才能获得所需的`Foo`对象。
This minimal example encodes two states:
这个最小的示例对两种状态进行编码:
* `FooBuilder`, which represents an "unconfigured", or "configuration in process" state
* `Foo`, which represents a "configured", or "ready to use" state.
* `FooBuilder`,代表“未配置”或“正在配置”状态
* `Foo`,表示“已配置”或“准备使用”状态。
## Strong Types
## 强类型
Because Rust has a [Strong Type System], there is no easy way to magically create an instance of `Foo`, or to turn a `FooBuilder` into a `Foo` without calling the `into_foo()` method. Additionally, calling the `into_foo()` method consumes the original `FooBuilder` structure, meaning it can not be reused without the creation of a new instance.
由于Rust具有[强类型系统],因此没有办法凭空变出一个`Foo`实例,也无法在不调用`into_foo()`方法的情况下把`FooBuilder`变成`Foo`。另外,调用`into_foo()`方法会消耗掉原来的`FooBuilder`对象,这意味着如果不创建新实例,就无法再次使用它。
[Strong Type System]: https://en.wikipedia.org/wiki/Strong_and_weak_typing
[强类型系统]: https://en.wikipedia.org/wiki/Strong_and_weak_typing
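用上面的`foo_module`写一个最小示例就能看出这一点:`into_foo()`按值消耗builder之后再使用它将无法通过编译

```rust,ignore
let builder = foo_module::FooBuilder::new(10);
let foo = builder.into_foo();
// let foo2 = builder.into_foo(); // 编译错误use of moved value: `builder`
```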
This allows us to represent the states of our system as types, and to include the necessary actions for state transitions into the methods that exchange one type for another. By creating a `FooBuilder`, and exchanging it for a `Foo` object, we have walked through the steps of a basic state machine.
这使我们可以把系统的状态表示为类型,并把状态转换所需的动作放进将一种类型转换为另一种类型的方法中。通过创建一个`FooBuilder`并把它转换为`Foo`对象,我们已经走完了一个最基本的状态机的各个步骤。

View File

@ -0,0 +1,65 @@
# Typestate Programming
The concept of [typestates] describes the encoding of information about the current state of an object into the type of that object. Although this can sound a little arcane, if you have used the [Builder Pattern] in Rust, you have already started using Typestate Programming!
[typestates]: https://en.wikipedia.org/wiki/Typestate_analysis
[Builder Pattern]: https://doc.rust-lang.org/1.0.0/style/ownership/builders.html
```rust
pub mod foo_module {
#[derive(Debug)]
pub struct Foo {
inner: u32,
}
pub struct FooBuilder {
a: u32,
b: u32,
}
impl FooBuilder {
pub fn new(starter: u32) -> Self {
Self {
a: starter,
b: starter,
}
}
pub fn double_a(self) -> Self {
Self {
a: self.a * 2,
b: self.b,
}
}
pub fn into_foo(self) -> Foo {
Foo {
inner: self.a + self.b,
}
}
}
}
fn main() {
let x = foo_module::FooBuilder::new(10)
.double_a()
.into_foo();
println!("{:#?}", x);
}
```
In this example, there is no direct way to create a `Foo` object. We must create a `FooBuilder`, and properly initialize it before we can obtain the `Foo` object we want.
This minimal example encodes two states:
* `FooBuilder`, which represents an "unconfigured", or "configuration in process" state
* `Foo`, which represents a "configured", or "ready to use" state.
## Strong Types
Because Rust has a [Strong Type System], there is no easy way to magically create an instance of `Foo`, or to turn a `FooBuilder` into a `Foo` without calling the `into_foo()` method. Additionally, calling the `into_foo()` method consumes the original `FooBuilder` structure, meaning it can not be reused without the creation of a new instance.
[Strong Type System]: https://en.wikipedia.org/wiki/Strong_and_weak_typing
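A minimal sketch using the `foo_module` above shows the point: `into_foo()` takes the builder by value, so trying to use it again will not compile:

```rust,ignore
let builder = foo_module::FooBuilder::new(10);
let foo = builder.into_foo();
// let foo2 = builder.into_foo(); // compile error: use of moved value: `builder`
```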
This allows us to represent the states of our system as types, and to include the necessary actions for state transitions into the methods that exchange one type for another. By creating a `FooBuilder`, and exchanging it for a `Foo` object, we have walked through the steps of a basic state machine.

View File

@ -1,8 +1,8 @@
# Zero Cost Abstractions
# 零成本抽象
Type states are also an excellent example of Zero Cost Abstractions - the ability to move certain behaviors to compile time execution or analysis. These type states contain no actual data, and are instead used as markers. Since they contain no data, they have no actual representation in memory at runtime:
类型状态也是零成本抽象的一个很好的例子:即把某些行为移到编译期去执行或分析。这些类型状态不包含任何实际数据,只是用作标记。由于它们不含数据,因此在运行时它们在内存中没有任何实际表示:
```rust,ignore
```rust , ignore
use core::mem::size_of;
let _ = size_of::<Enabled>(); // == 0
@ -11,17 +11,18 @@ let _ = size_of::<PulledHigh>(); // == 0
let _ = size_of::<GpioConfig<Enabled, Input, PulledHigh>>(); // == 0
```
## Zero Sized Types
## 零大小类型
```rust,ignore
```rust , ignore
struct Enabled;
```
Structures defined like this are called Zero Sized Types, as they contain no actual data. Although these types act "real" at compile time - you can copy them, move them, take references to them, etc., however the optimizer will completely strip them away.
像这样定义的结构体称为零大小类型Zero Sized Type因为它们不包含任何实际数据。虽然这些类型在编译期表现得很“真实”您可以复制它们、移动它们、获取它们的引用等等,但优化器会把它们完全消除。
In this snippet of code:
在这段代码中:
```rust,ignore
```rust , ignore
pub fn into_input_high_z(self) -> GpioConfig<Enabled, Input, HighZ> {
self.periph.modify(|_r, w| w.input_mode().high_z());
GpioConfig {
@ -33,10 +34,10 @@ pub fn into_input_high_z(self) -> GpioConfig<Enabled, Input, HighZ> {
}
```
The GpioConfig we return never exists at runtime. Calling this function will generally boil down to a single assembly instruction - storing a constant register value to a register location. This means that the type state interface we've developed is a zero cost abstraction - it uses no more CPU, RAM, or code space tracking the state of `GpioConfig`, and renders to the same machine code as a direct register access.
我们返回的`GpioConfig`在运行时根本不存在。调用这个函数通常会被归结为一条汇编指令:把一个常量值存入某个寄存器位置。这意味着我们开发的类型状态接口是一种零成本抽象zero cost abstraction它不会额外使用CPU、RAM或代码空间来跟踪`GpioConfig`的状态,并且生成的机器码与直接访问寄存器的代码相同。
## Nesting
## 嵌套
In general, these abstractions may be nested as deeply as you would like. As long as all components used are zero sized types, the whole structure will not exist at runtime.
通常,这些抽象对象可以任意嵌套,只要使用的所有对象都是零大小的类型,整个结构体在运行时就不会存在。
For complex or deeply nested structures, it may be tedious to define all possible combinations of state. In these cases, macros may be used to generate all implementations.
对于复杂或深度嵌套的结构,逐一定义所有可能的状态组合会很繁琐,这时可以借助宏来生成所有的实现。
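下面是一个体现这种思路的最小草图(只是示意):先用宏批量生成零大小的状态标记类型,同样的写法也可以扩展成为每种状态组合生成`impl`块:

```rust,ignore
// 用宏一次性声明所有零大小的状态标记类型
macro_rules! states {
    ($($name:ident),+ $(,)?) => {
        $( pub struct $name; )+
    };
}

states!(Enabled, Disabled, Input, Output, HighZ, PulledLow, PulledHigh);
```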

View File

@ -0,0 +1,42 @@
# Zero Cost Abstractions
Type states are also an excellent example of Zero Cost Abstractions - the ability to move certain behaviors to compile time execution or analysis. These type states contain no actual data, and are instead used as markers. Since they contain no data, they have no actual representation in memory at runtime:
```rust , ignore
use core::mem::size_of;
let _ = size_of::<Enabled>(); // == 0
let _ = size_of::<Input>(); // == 0
let _ = size_of::<PulledHigh>(); // == 0
let _ = size_of::<GpioConfig<Enabled, Input, PulledHigh>>(); // == 0
```
## Zero Sized Types
```rust , ignore
struct Enabled;
```
Structures defined like this are called Zero Sized Types, as they contain no actual data. Although these types act "real" at compile time - you can copy them, move them, take references to them, etc., however the optimizer will completely strip them away.
In this snippet of code:
```rust , ignore
pub fn into_input_high_z(self) -> GpioConfig<Enabled, Input, HighZ> {
self.periph.modify(|_r, w| w.input_mode().high_z());
GpioConfig {
periph: self.periph,
enabled: Enabled,
direction: Input,
mode: HighZ,
}
}
```
The GpioConfig we return never exists at runtime. Calling this function will generally boil down to a single assembly instruction - storing a constant register value to a register location. This means that the type state interface we've developed is a zero cost abstraction - it uses no more CPU, RAM, or code space tracking the state of `GpioConfig`, and renders to the same machine code as a direct register access.
## Nesting
In general, these abstractions may be nested as deeply as you would like. As long as all components used are zero sized types, the whole structure will not exist at runtime.
For complex or deeply nested structures, it may be tedious to define all possible combinations of state. In these cases, macros may be used to generate all implementations.
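A minimal sketch of the idea (illustrative only): a macro can stamp out the zero sized marker types, and the same approach can be extended to generate the `impl` blocks for each state combination:

```rust,ignore
// Declare all the zero sized state markers in one go
macro_rules! states {
    ($($name:ident),+ $(,)?) => {
        $( pub struct $name; )+
    };
}

states!(Enabled, Disabled, Input, Output, HighZ, PulledLow, PulledHigh);
```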

View File

@ -1 +1 @@
# Unsorted topics
# 其他主题

1
src/unsorted/index_en.md Normal file
View File

@ -0,0 +1 @@
# Unsorted topics

View File

@ -1,21 +1,12 @@
# Optimizations: the speed size tradeoff
# 优化:速度大小的权衡
Everyone wants their program to be super fast and super small but it's usually
not possible to have both characteristics. This section discusses the
different optimization levels that `rustc` provides and how they affect the
execution time and binary size of a program.
每个人都希望他们的程序超快,超小,但通常不可能兼具这两个特性。本节讨论`rustc` 提供的不同优化级别,以及它们如何影响程序的执行时间和二进制大小。
## No optimizations
## 没有优化
This is the default. When you call `cargo build` you use the development (AKA
`dev`) profile. This profile is optimized for debugging so it enables debug
information and does *not* enable any optimizations, i.e. it uses `-C opt-level
= 0`.
这是默认值。当您调用`cargo build`时,可以使用开发(也就是`dev`)配置文件。这个配置文件针对调试进行了优化,因此它启用调试信息并且*不*启用任何优化,即它使用`-C opt-level = 0`。
At least for bare metal development, debuginfo is zero cost in the sense that it
won't occupy space in Flash / ROM so we actually recommend that you enable
debuginfo in the release profile -- it is disabled by default. That will let you
use breakpoints when debugging release builds.
至少对于裸机开发而言debuginfo的成本为零因为它不会占用Flash/ROM中的空间因此我们建议您在发行配置文件中启用debuginfo(默认情况下处于禁用状态)。这样您就可以在调试发行版时使用断点。
``` toml
[profile.release]
@ -23,28 +14,20 @@ use breakpoints when debugging release builds.
debug = true
```
No optimizations is great for debugging because stepping through the code feels
like you are executing the program statement by statement, plus you can `print`
stack variables and function arguments in GDB. When the code is optimized, trying
to print variables results in `$0 = <value optimized out>` being printed.
不开优化对调试非常友好因为单步执行时感觉就像在逐条语句地执行程序而且还可以在GDB中`print`栈上变量和函数参数。而代码被优化后,尝试打印变量只会得到`$0 = <value optimized out>`。
The biggest downside of the `dev` profile is that the resulting binary will be
huge and slow. The size is usually more of a problem because unoptimized
binaries can occupy dozens of KiB of Flash, which your target device may not
have -- the result: your unoptimized binary doesn't fit in your device!
`dev`配置文件最大的缺点是生成的二进制文件又大又慢。大小通常是更突出的问题因为未优化的二进制文件可能占用几十KiB的Flash而目标设备可能没有这么多空间结果就是未优化的二进制文件根本放不进您的设备
Can we have smaller, debugger friendly binaries? Yes, there's a trick.
我们能否得到更小、但仍对调试器友好的二进制文件呢?可以,有一个窍门。
### Optimizing dependencies
### 优化依赖
There's a Cargo feature named [`profile-overrides`] that lets you
override the optimization level of dependencies. You can use that feature to
optimize all dependencies for size while keeping the top crate unoptimized and
debugger friendly.
Cargo有一个名为[`profile-overrides`]的特性可以让您覆盖依赖项的优化级别。您可以利用这个特性对所有依赖项按大小进行优化同时保持项目自身顶层crate不被优化、对调试器友好。
[`profile-overrides`]: https://doc.rust-lang.org/cargo/reference/profiles.html#overrides
Here's an example:
这是一个例子:
``` toml
# Cargo.toml
@ -56,7 +39,8 @@ name = "app"
opt-level = "z" # +
```
Without the override:
没有覆盖默认优化时:
``` console
$ cargo size --bin app -- -A
@ -69,7 +53,8 @@ section size addr
.bss 4 0x20000000
```
With the override:
使用覆盖:
``` console
$ cargo size --bin app -- -A
@ -81,13 +66,7 @@ section size addr
.data 0 0x20000000
.bss 4 0x20000000
```
That's a 6 KiB reduction in Flash usage without any loss in the debuggability of
the top crate. If you step into a dependency then you'll start seeing those
`<value optimized out>` messages again but it's usually the case that you want
to debug the top crate and not the dependencies. And if you *do* need to debug a
dependency then you can use the `profile-overrides` feature to exclude a
particular dependency from being optimized. See example below:
Flash使用量减少了6 KiB而项目自身的可调试性没有任何损失。如果您单步进入某个依赖项就又会看到那些`<value optimized out>`消息,但通常您要调试的是项目自身而不是依赖项。如果您确实需要调试某个依赖项,则可以使用`profile-overrides`特性把这个依赖项排除在优化之外。参见下面的示例:
``` toml
# ..
@ -102,34 +81,21 @@ codegen-units = 1 # better optimizations
opt-level = "z"
```
Now the top crate and `cortex-m-rt` are debugger friendly!
现在,项目自身和`cortex-m-rt`都对调试器友好了!
## Optimize for speed
## 优化速度
As of 2018-09-18 `rustc` supports three "optimize for speed" levels: `opt-level
= 1`, `2` and `3`. When you run `cargo build --release` you are using the release
profile which defaults to `opt-level = 3`.
自2018-09-18起`rustc`支持三种“优化速度”级别:`opt-level = 1`、`2`和`3`。当您运行`cargo build --release`时使用的是发布release配置文件其默认值为`opt-level = 3`。
Both `opt-level = 2` and `3` optimize for speed at the expense of binary size,
but level `3` does more vectorization and inlining than level `2`. In
particular, you'll see that at `opt-level` equal to or greater than `2` LLVM will
unroll loops. Loop unrolling has a rather high cost in terms of Flash / ROM
(e.g. from 26 bytes to 194 for a zero this array loop) but can also halve the
execution time given the right conditions (e.g. number of iterations is big
enough).
`opt-level = 2`和`3`都以二进制大小为代价来优化速度其中3级比2级做了更多的向量化和内联。特别是您会看到在`opt-level`等于或大于2时LLVM会展开循环。就Flash/ROM占用而言循环展开的代价相当高例如对于一个把数组清零的循环代码可能从26个字节增大到194个字节但在合适的条件下例如迭代次数足够多它也能把执行时间减半。
Currently there's no way to disable loop unrolling in `opt-level = 2` and `3` so
if you can't afford its cost you should optimize your program for size.
目前没有办法在`opt-level = 2`和`3`下禁用循环展开,因此如果您承受不起它的代价,就应该针对大小来优化程序。
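正文里说的“把数组清零”的循环大致就是下面这种形式(只是一个示意,具体的字节数取决于目标平台和编译器版本);在`opt-level`为2或3时LLVM可能把它完全展开

```rust,ignore
// 一个可能被 LLVM 展开的循环:把缓冲区清零
fn zero(buf: &mut [u8; 32]) {
    for byte in buf.iter_mut() {
        *byte = 0;
    }
}
```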
## Optimize for size
## 优化大小
As of 2018-09-18 `rustc` supports two "optimize for size" levels: `opt-level =
"s"` and `"z"`. These names were inherited from clang / LLVM and are not too
descriptive but `"z"` is meant to give the idea that it produces smaller
binaries than `"s"`.
自2018-09-18起`rustc`支持两种“优化大小”级别:`opt-level = "s"`和`"z"`。这些名称继承自clang/LLVM描述性不算强`"z"`的含义是它生成的二进制文件比`"s"`更小。
If you want your release binaries to be optimized for size then change the
`profile.release.opt-level` setting in `Cargo.toml` as shown below.
如果您希望针对大小来优化发布版二进制文件,请按如下所示修改`Cargo.toml`中的`profile.release.opt-level`设置。
``` toml
[profile.release]
@ -137,18 +103,9 @@ If you want your release binaries to be optimized for size then change the
opt-level = "s"
```
These two optimization levels greatly reduce LLVM's inline threshold, a metric
used to decide whether to inline a function or not. One of Rust principles are
zero cost abstractions; these abstractions tend to use a lot of newtypes and
small functions to hold invariants (e.g. functions that borrow an inner value
like `deref`, `as_ref`) so a low inline threshold can make LLVM miss
optimization opportunities (e.g. eliminate dead branches, inline calls to
closures).
这两个优化级别都会大幅降低LLVM的内联阈值内联阈值是用来决定是否内联某个函数的指标。Rust的原则之一是零成本抽象这些抽象往往使用大量newtype和小函数来维持不变式例如像`deref`、`as_ref`这类借用内部值的函数因此过低的内联阈值会让LLVM错过优化机会例如消除死分支、内联对闭包的调用
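“newtype和维持不变式的小函数”指的是下面这类代码一个假想的例子`Percent`并非出自任何库):只有当`get()`这类单行函数被内联掉newtype才是零成本的内联阈值太低时每次访问都可能变成一次真正的函数调用

```rust,ignore
// 一个假想的 newtype小函数维持“值已经过校验”这一不变式
pub struct Percent(u8);

impl Percent {
    pub fn new(value: u8) -> Option<Percent> {
        if value <= 100 { Some(Percent(value)) } else { None }
    }

    // 类似 `deref`/`as_ref` 的小访问函数,只有被内联后才是零成本
    pub fn get(&self) -> u8 {
        self.0
    }
}
```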
When optimizing for size you may want to try increasing the inline threshold to
see if that has any effect on the binary size. The recommended way to change the
inline threshold is to append the `-C inline-threshold` flag to the other
rustflags in `.cargo/config`.
在优化大小时,您可能想尝试增加内联阈值,以查看这是否对二进制大小有影响。推荐的更改内联阈值的方法是将`-C inline-threshold` 参数附加到`.cargo/config`中的rustflags。
``` toml
# .cargo/config
@ -160,14 +117,13 @@ rustflags = [
]
```
What value to use? [As of 1.29.0 these are the inline thresholds that the
different optimization levels use][inline-threshold]:
内联阈值用什么值合适从1.29.0开始,不同优化级别使用的[内联阈值]如下:
[inline-threshold]: https://github.com/rust-lang/rust/blob/1.29.0/src/librustc_codegen_llvm/back/write.rs#L2105-L2122
[内联阈值]:https://github.com/rust-lang/rust/blob/1.29.0/src/librustc_codegen_llvm/back/write.rs#L2105-L2122
- `opt-level = 3` uses 275
- `opt-level = 2` uses 225
- `opt-level = "s"` uses 75
- `opt-level = "z"` uses 25
- `opt-level = 3` 使用 275
- `opt-level = 2` 使用 225
- `opt-level = "s"` 使用 75
- `opt-level = "z"` 使用 25
You should try `225` and `275` when optimizing for size.
在针对大小进行优化时,应该试试更大的内联阈值,比如`225`和`275`。

View File

@ -0,0 +1,128 @@
# Optimizations: the speed size tradeoff
Everyone wants their program to be super fast and super small but it's usually not possible to have both characteristics. This section discusses the different optimization levels that `rustc` provides and how they affect the execution time and binary size of a program.
## No optimizations
This is the default. When you call `cargo build` you use the development (AKA `dev`) profile. This profile is optimized for debugging so it enables debug information and does *not* enable any optimizations, i.e. it uses `-C opt-level = 0`.
At least for bare metal development, debuginfo is zero cost in the sense that it won't occupy space in Flash / ROM so we actually recommend that you enable debuginfo in the release profile -- it is disabled by default. That will let you use breakpoints when debugging release builds.
``` toml
[profile.release]
# symbols are nice and they don't increase the size on Flash
debug = true
```
No optimizations is great for debugging because stepping through the code feels like you are executing the program statement by statement, plus you can `print` stack variables and function arguments in GDB. When the code is optimized, trying to print variables results in `$0 = <value optimized out>` being printed.
The biggest downside of the `dev` profile is that the resulting binary will be huge and slow. The size is usually more of a problem because unoptimized binaries can occupy dozens of KiB of Flash, which your target device may not have -- the result: your unoptimized binary doesn't fit in your device!
Can we have smaller, debugger friendly binaries? Yes, there's a trick.
### Optimizing dependencies
There's a Cargo feature named [`profile-overrides`] that lets you override the optimization level of dependencies. You can use that feature to optimize all dependencies for size while keeping the top crate unoptimized and debugger friendly.
[`profile-overrides`]: https://doc.rust-lang.org/cargo/reference/profiles.html#overrides
Here's an example:
``` toml
# Cargo.toml
[package]
name = "app"
# ..
[profile.dev.package."*"] # +
opt-level = "z" # +
```
Without the override:
``` console
$ cargo size --bin app -- -A
app :
section size addr
.vector_table 1024 0x8000000
.text 9060 0x8000400
.rodata 1708 0x8002780
.data 0 0x20000000
.bss 4 0x20000000
```
With the override:
``` console
$ cargo size --bin app -- -A
app :
section size addr
.vector_table 1024 0x8000000
.text 3490 0x8000400
.rodata 1100 0x80011c0
.data 0 0x20000000
.bss 4 0x20000000
```
That's a 6 KiB reduction in Flash usage without any loss in the debuggability of the top crate. If you step into a dependency then you'll start seeing those `<value optimized out>` messages again but it's usually the case that you want to debug the top crate and not the dependencies. And if you *do* need to debug a dependency then you can use the `profile-overrides` feature to exclude a particular dependency from being optimized. See example below:
``` toml
# ..
# don't optimize the `cortex-m-rt` crate
[profile.dev.package.cortex-m-rt] # +
opt-level = 0 # +
# but do optimize all the other dependencies
[profile.dev.package."*"]
codegen-units = 1 # better optimizations
opt-level = "z"
```
Now the top crate and `cortex-m-rt` are debugger friendly!
## Optimize for speed
As of 2018-09-18 `rustc` supports three "optimize for speed" levels: `opt-level = 1`, `2` and `3`. When you run `cargo build --release` you are using the release profile which defaults to `opt-level = 3`.
Both `opt-level = 2` and `3` optimize for speed at the expense of binary size,
but level `3` does more vectorization and inlining than level `2`. In particular, you'll see that at `opt-level` equal to or greater than `2` LLVM will unroll loops. Loop unrolling has a rather high cost in terms of Flash / ROM (e.g. from 26 bytes to 194 for a zero this array loop) but can also halve the execution time given the right conditions (e.g. number of iterations is big enough).
Currently there's no way to disable loop unrolling in `opt-level = 2` and `3` so if you can't afford its cost you should optimize your program for size.
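The "zero this array" loop mentioned above is roughly of the following shape (a sketch only; the exact byte counts depend on the target and compiler version). At `opt-level` 2 or 3, LLVM may unroll it completely:

```rust,ignore
// A loop that LLVM may unroll: zero a buffer
fn zero(buf: &mut [u8; 32]) {
    for byte in buf.iter_mut() {
        *byte = 0;
    }
}
```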
## Optimize for size
As of 2018-09-18 `rustc` supports two "optimize for size" levels: `opt-level = "s"` and `"z"`. These names were inherited from clang / LLVM and are not too descriptive but `"z"` is meant to give the idea that it produces smaller binaries than `"s"`.
If you want your release binaries to be optimized for size then change the `profile.release.opt-level` setting in `Cargo.toml` as shown below.
``` toml
[profile.release]
# or "z"
opt-level = "s"
```
These two optimization levels greatly reduce LLVM's inline threshold, a metric used to decide whether to inline a function or not. One of Rust principles are zero cost abstractions; these abstractions tend to use a lot of newtypes and small functions to hold invariants (e.g. functions that borrow an inner value like `deref`, `as_ref`) so a low inline threshold can make LLVM miss optimization opportunities (e.g. eliminate dead branches, inline calls to closures).
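The "newtypes and small functions to hold invariants" are code of the following shape (a hypothetical example; `Percent` is not from any library). The newtype is only zero cost if one-liners like `get()` actually get inlined; with a very low inline threshold each access may remain a real function call:

```rust,ignore
// A hypothetical newtype: small functions uphold the "already validated" invariant
pub struct Percent(u8);

impl Percent {
    pub fn new(value: u8) -> Option<Percent> {
        if value <= 100 { Some(Percent(value)) } else { None }
    }

    // A tiny accessor in the spirit of `deref`/`as_ref`; zero cost only when inlined
    pub fn get(&self) -> u8 {
        self.0
    }
}
```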
When optimizing for size you may want to try increasing the inline threshold to see if that has any effect on the binary size. The recommended way to change the inline threshold is to append the `-C inline-threshold` flag to the other rustflags in `.cargo/config`.
``` toml
# .cargo/config
# this assumes that you are using the cortex-m-quickstart template
[target.'cfg(all(target_arch = "arm", target_os = "none"))']
rustflags = [
# ..
"-C", "inline-threshold=123", # +
]
```
What value to use? [As of 1.29.0 these are the inline thresholds that the different optimization levels use][inline-threshold]:
[inline-threshold]: https://github.com/rust-lang/rust/blob/1.29.0/src/librustc_codegen_llvm/back/write.rs#L2105-L2122
- `opt-level = 3` uses 275
- `opt-level = 2` uses 225
- `opt-level = "s"` uses 75
- `opt-level = "z"` uses 25
You should try `225` and `275` when optimizing for size.