Mirror of https://github.com/kdlucas/byte-unixbench.git (synced 2024-12-11 23:30:07 +08:00)
Merge pull request #30 from mbrukman/readme-html-to-markdown
Fixed Markdown formatting.
Commit 1274c14753

README.md
# byte-unixbench
**UnixBench** is the original BYTE UNIX benchmark suite, updated and revised by many people over the years.

The purpose of UnixBench is to provide a basic indicator of the performance of a Unix-like system; hence, multiple
tests are used to test various aspects of the system's performance. These test results are then compared to the
scores from a baseline system to produce an index value for each test.

Multi-CPU systems are handled. If your system has multiple CPUs, the default behaviour is to run the selected tests
twice -- once with one copy of each test program running at a time, and once with N copies, where N is the number of
CPUs. This is designed to allow you to assess:

* the performance of your system when running a single task
* the performance of your system when running multiple tasks
* the gain from your system's implementation of parallel processing

Do be aware that this is a system benchmark, not a CPU, RAM or disk benchmark. The results will depend not only on
your hardware, but on your operating system, libraries, and even compiler.

## History
**UnixBench** was first started in 1983 at Monash University, as a simple synthetic benchmarking application. It
was then taken and expanded by **Byte Magazine**. Linux modifications were made by Jon Tombs; the original authors were Ben Smith,
Rick Grehan, and Tom Yager. The tests compare Unix systems by comparing their results to a set of scores set
by running the code on a benchmark system, which is a SPARCstation 20-61 (rated at 10.0).

David C. Niemi maintained the program for quite some time, and made some major modifications and updates,
and produced **UnixBench 4**. He later gave the program to Ian Smith to maintain. Ian subsequently made
some major changes and revised it from version 4 to version 5.

Thanks to Ian Smith for managing the release up to 5.1.3. As of the next release (5.2), [Anthony F. Voellm](https://github.com/voellm) is going to help maintain the code base. The releases will happen once there are enough pull requests to warrant a new release.

The general process will be the following:

* Open a bug announcing that a new release will happen.
* Everything on the `dev` branch will be run.
* Code will move from the `dev` branch into `main` and be tagged. Bug-fix releases will increment the sub-version, and major functionality changes will increase the major version.

## Included Tests
UnixBench consists of a number of individual tests that are targeted at specific areas. Here is a summary of what
each test does:

### Dhrystone
Developed by Reinhold Weicker in 1984, this benchmark is used to measure and compare the performance of computers. The test focuses on string handling, as there are no floating-point operations. It is heavily influenced by hardware and software design, compiler and linker options, code optimization, cache memory, wait states, and integer data types.
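
As a rough illustration -- a minimal sketch, not the actual Dhrystone source -- the benchmark's inner loop is dominated by integer and string work of this kind (the iteration count here is arbitrary):

```c
/* Sketch of Dhrystone-style work: string copies, comparisons and
   integer bookkeeping, with no floating-point operations.
   Not the real benchmark; iteration count is arbitrary. */
#include <stdio.h>
#include <string.h>

int main(void) {
    char src[] = "DHRYSTONE PROGRAM, SOME STRING";
    char dst[64];
    long matches = 0;

    for (long i = 0; i < 10000000; i++) {
        strcpy(dst, src);             /* string handling */
        if (strcmp(dst, src) == 0)    /* comparison and branch */
            matches++;
    }
    printf("%ld iterations\n", matches);
    return 0;
}
```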

### Whetstone
This test measures the speed and efficiency of floating-point operations. It contains several modules that are meant to represent a mix of operations typically performed in scientific applications. A wide variety of C functions including `sin`, `cos`, `sqrt`, `exp`, and `log` are used, as well as integer and floating-point math operations, array accesses, conditional branches, and procedure calls. The test measures both integer and floating-point arithmetic.
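
A minimal sketch -- not the actual Whetstone modules -- of the kind of floating-point loop being timed (the iteration count is arbitrary):

```c
/* Sketch of Whetstone-style work: a tight loop over transcendental
   C library functions. Not the real benchmark modules. */
#include <math.h>
#include <stdio.h>

int main(void) {
    double x = 1.0;

    for (long i = 0; i < 1000000; i++)
        x = sqrt(exp(log(x) + 1.0)) + sin(x) * cos(x);

    printf("result: %f\n", x);  /* print so the loop isn't optimized away */
    return 0;
}
```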

### `execl` Throughput
This test measures the number of `execl` calls that can be performed per second. `execl` is part of the exec family of functions that replace the current process image with a new process image. It and many other similar commands are front ends for the function `execve()`.
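
A minimal sketch of the system call involved -- not the actual test program -- where a child's process image is replaced via `execl` (the `/bin/true` path is an assumption):

```c
/* Sketch: fork a child whose process image is replaced via execl().
   Not the actual test; /bin/true is an assumed path. */
#include <stdio.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void) {
    for (int i = 0; i < 1000; i++) {
        pid_t pid = fork();
        if (pid == 0) {
            execl("/bin/true", "true", (char *)NULL);
            _exit(1);               /* reached only if execl fails */
        }
        if (pid > 0)
            waitpid(pid, NULL, 0);  /* reap the child */
    }
    puts("done");
    return 0;
}
```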

### File Copy
This measures the rate at which data can be transferred from one file to another, using various buffer sizes. The file read, write and copy tests capture the number of characters that can be written, read and copied in a specified time (default is 10 seconds).
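
A minimal sketch of the underlying pattern, assuming a 4 KB buffer (the real tests vary the buffer size and count transfers over a fixed time rather than copying one file once):

```c
/* Sketch: copy a file through a fixed-size buffer using read()/write().
   Not the actual test harness; buffer size is one assumed value. */
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(int argc, char **argv) {
    if (argc != 3) { fprintf(stderr, "usage: %s src dst\n", argv[0]); return 1; }

    int in = open(argv[1], O_RDONLY);
    int out = open(argv[2], O_WRONLY | O_CREAT | O_TRUNC, 0644);
    if (in < 0 || out < 0) { perror("open"); return 1; }

    char buf[4096];           /* the real tests vary this buffer size */
    ssize_t n;
    while ((n = read(in, buf, sizeof buf)) > 0)
        write(out, buf, (size_t)n);

    close(in);
    close(out);
    return 0;
}
```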

### Pipe Throughput
A pipe is the simplest form of communication between processes. Pipe throughput is the number of times (per second) a process can write 512 bytes to a pipe and read them back. The pipe throughput test has no real counterpart in real-world programming.
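
A minimal sketch of the measured operation (the iteration count is arbitrary; the real test counts round trips completed in a fixed time):

```c
/* Sketch: one process writes 512 bytes into a pipe and reads them
   straight back out. Not the actual test source. */
#include <stdio.h>
#include <unistd.h>

int main(void) {
    int fds[2];
    char buf[512] = {0};

    if (pipe(fds) != 0) { perror("pipe"); return 1; }

    long count = 0;
    for (long i = 0; i < 100000; i++) {
        write(fds[1], buf, sizeof buf);  /* 512 bytes into the pipe */
        read(fds[0], buf, sizeof buf);   /* ...and back out again */
        count++;
    }
    printf("%ld round trips\n", count);
    return 0;
}
```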

### Pipe-based Context Switching
This test measures the number of times two processes can exchange an increasing integer through a pipe. The pipe-based context switching test is more like a real-world application. The test program spawns a child process with which it carries on a bi-directional pipe conversation.
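
A minimal sketch of the mechanism -- not the actual test code -- where two pipes connect parent and child, and each round trip of the counter forces the kernel to switch between the two processes:

```c
/* Sketch: parent and child bounce an increasing integer back and
   forth through a pair of pipes. Not the actual test source. */
#include <stdio.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void) {
    int p2c[2], c2p[2];  /* parent-to-child and child-to-parent pipes */
    if (pipe(p2c) != 0 || pipe(c2p) != 0) { perror("pipe"); return 1; }

    if (fork() == 0) {                  /* child */
        close(p2c[1]); close(c2p[0]);   /* close the ends it doesn't use */
        int n;
        while (read(p2c[0], &n, sizeof n) == (ssize_t)sizeof n) {
            n++;                        /* bump the counter, send it back */
            write(c2p[1], &n, sizeof n);
        }
        _exit(0);
    }
    close(p2c[0]); close(c2p[1]);       /* close the ends it doesn't use */

    int n = 0;
    for (int i = 0; i < 100000; i++) {  /* each round trip forces switches */
        write(p2c[1], &n, sizeof n);
        read(c2p[0], &n, sizeof n);
    }
    close(p2c[1]);                      /* child's read() returns 0; it exits */
    wait(NULL);
    printf("final value: %d\n", n);
    return 0;
}
```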

### Process Creation
This test measures the number of times a process can fork and reap a child that immediately exits. Process creation refers to actually creating process control blocks and memory allocations for new processes, so this applies directly to memory bandwidth. Typically, this benchmark would be used to compare various implementations of operating system process creation calls.
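
A minimal sketch of the measured loop (iteration count arbitrary; the real test counts loops completed in a fixed time):

```c
/* Sketch: fork children that exit immediately, and reap each one.
   Not the actual test source. */
#include <stdio.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void) {
    long created = 0;
    for (long i = 0; i < 10000; i++) {
        pid_t pid = fork();
        if (pid == 0)
            _exit(0);               /* child exits immediately */
        if (pid > 0) {
            waitpid(pid, NULL, 0);  /* parent reaps it */
            created++;
        }
    }
    printf("%ld processes created and reaped\n", created);
    return 0;
}
```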

### Shell Scripts
The shell scripts test measures the number of times per minute a process can start and reap a set of one, two, four and eight concurrent copies of a shell script, where the shell script applies a series of transformations to a data file.
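
A minimal sketch of the start-and-reap pattern in C; the actual test runs a specific script that transforms a data file, so the `true` command here is only a stand-in workload:

```c
/* Sketch: start several concurrent copies of a shell and reap them
   all. The real test uses 1, 2, 4 and 8 copies of a data-transforming
   script; "true" is a stand-in. Not the actual test source. */
#include <stdio.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void) {
    const int copies = 4;

    for (int i = 0; i < copies; i++) {
        if (fork() == 0) {
            execl("/bin/sh", "sh", "-c", "true", (char *)NULL);
            _exit(1);       /* reached only if execl fails */
        }
    }
    while (wait(NULL) > 0)  /* reap every concurrent child */
        ;
    puts("all copies reaped");
    return 0;
}
```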

### System Call Overhead
This estimates the cost of entering and leaving the operating system kernel, i.e., the overhead of performing a system call. It consists of a simple program repeatedly calling `getpid` (a system call that returns the process ID of the calling process). The time taken to execute such calls is used to estimate the cost of entering and exiting the kernel.
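
A minimal sketch of the measurement (iteration count arbitrary; `clock()` is used here for simplicity, while the real harness does its own timing):

```c
/* Sketch: time a tight loop of getpid() calls to estimate per-call
   kernel entry/exit overhead. Not the actual test source. */
#include <stdio.h>
#include <time.h>
#include <unistd.h>

int main(void) {
    const long iters = 10000000;
    clock_t start = clock();

    for (long i = 0; i < iters; i++)
        getpid();  /* a cheap system call: enter and leave the kernel */

    double secs = (double)(clock() - start) / CLOCKS_PER_SEC;
    printf("%.1f ns per call\n", secs / iters * 1e9);
    return 0;
}
```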

### Graphical Tests
Both 2D and 3D graphical tests are provided; at the moment, the 3D suite in particular is very limited, consisting of the `ubgears` program. These tests are intended to provide a very rough idea of the system's 2D and 3D graphics performance. Bear in mind, of course, that the reported performance will depend not only on hardware, but on whether your system has appropriate drivers for it.

## License