Benchmark

1.2.0

Swift benchmark runner with many performance metrics and great CI support
ordo-one/package-benchmark

What's New

1.2.0 (2023-03-22)

To run benchmarks without jemalloc installed:

BENCHMARK_DISABLE_JEMALLOC=1 swift package benchmark

Features

  • minor: Make it possible to build and run without jemalloc available (#127) (93b8ae9)

[Badges: Swift Linux build | Swift macOS build | Swift address sanitizer | Swift thread sanitizer | codecov]

Benchmark

Benchmark allows you to easily create sophisticated Swift performance benchmarks

Overview

Performance is a key feature for many apps and frameworks. Benchmark makes it easy to measure and track many different metrics that affect performance, such as CPU usage, memory usage, and use of operating system resources such as threads and system calls, as well as completely custom metric counters.

Benchmark works on both macOS and Linux and supports several key workflows for performance measurements.

Benchmark provides a quick way to validate performance metrics, while other, more specialized tools such as Instruments, DTrace, Heaptrack, Leaks, and Sample can be used to find the root causes of any deviations found.

Benchmark is suitable both for smaller ad-hoc benchmarks focusing on execution time and for more extensive benchmarks that care about several additional metrics such as memory allocations, syscalls, thread usage, context switches, and more. Thanks to its Histogram foundation, it's especially suitable for capturing latency statistics for a large number of samples.
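
As a small, hedged preview (the full declaration syntax is covered under "Writing benchmarks" below, and the metric names used here are standard BenchmarkMetric cases), a benchmark can be configured to collect only the metrics it cares about:

import Benchmark

let benchmarks = {
    // Collect a selected set of metrics instead of BenchmarkMetric.all
    Benchmark("Selected metrics",
              configuration: .init(metrics: [.wallClock, .cpuTotal, .mallocCountTotal, .throughput],
                                   maxDuration: .seconds(5))) { benchmark in
        // measure something here
    }
}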

Documentation

Documentation on how to use Benchmark in your Swift package can be viewed online (hosted by the Swift Package Index, thanks!) or inside Xcode using Build Documentation. Additionally, the command plugin provides help information if you run swift package benchmark help from the command line.

Adding dependencies and getting started

Add a package dependency to Package.swift

To add the dependency on Benchmark, add the following to the dependencies array of your Package.swift:

.package(url: "https://github.com/ordo-one/package-benchmark", .upToNextMajor(from: "1.0.0")),
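
For orientation, here is a minimal sketch of where that line sits in a complete Package.swift (the package name is a placeholder; the targets array is filled in as described in the next section):

// swift-tools-version: 5.7
import PackageDescription

let package = Package(
    name: "MyPackage", // placeholder package name
    dependencies: [
        .package(url: "https://github.com/ordo-one/package-benchmark", .upToNextMajor(from: "1.0.0")),
    ],
    targets: [
        // benchmark executable targets go here, see the next section
    ]
)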

Add benchmark executable targets

Create an executable target in Package.swift for each benchmark suite you want to measure. The source for all benchmarks must reside in a directory named Benchmarks in the root of your Swift package. The benchmark plugin uses this directory, combined with the executable target information, to automatically discover and run your benchmarks. For each executable target, include dependencies on both Benchmark (supporting framework) and BenchmarkPlugin (boilerplate generator) from package-benchmark. The following example shows a benchmark suite named My-Benchmark with the required dependencies and with its source files residing in the directory Benchmarks/My-Benchmark:

.executableTarget(
    name: "My-Benchmark",
    dependencies: [
        .product(name: "Benchmark", package: "package-benchmark"),
        .product(name: "BenchmarkPlugin", package: "package-benchmark"),
    ],
    path: "Benchmarks/My-Benchmark"
),

Writing benchmarks

There is documentation available, as well as a sample project showing various aspects of this package in practice.

Sample benchmark code

import Benchmark

let benchmarks = {
    Benchmark("Minimal benchmark") { benchmark in
      // measure something here
    }

    Benchmark("All metrics, full concurrency, async",
              configuration: .init(metrics: BenchmarkMetric.all,
                                   maxDuration: .seconds(10))) { benchmark in
        let _ = await withTaskGroup(of: Void.self, returning: Void.self, body: { taskGroup in
            for _ in 0..<80  {
                taskGroup.addTask {
                    dummyCounter(defaultCounter() * 1000) // dummy workload (helper functions defined elsewhere)
                }
            }
            for await _ in taskGroup {
            }
        })
    }
}
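
As a further hedged sketch (it assumes the blackHole helper, the scaledIterations property, and the scalingFactor configuration option available in current releases), a scaled, throughput-oriented benchmark added inside the same benchmarks closure could look like this:

Benchmark("Scaled throughput",
          configuration: .init(metrics: [.throughput, .wallClock],
                               scalingFactor: .kilo)) { benchmark in
    // scaledIterations reflects the configured scaling factor, and
    // blackHole keeps the optimizer from discarding the work
    for _ in benchmark.scaledIterations {
        blackHole(Int.random(in: 0 ..< 1_000))
    }
}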

Running benchmarks

To execute all defined benchmarks, simply run:

swift package benchmark

Please see the documentation for more detail on all options.

Sample output benchmark run

[screenshot]

Sample output benchmark grouped by metric

[screenshot]

Sample output delta comparison

[screenshot]

Sample output threshold deviation check

[screenshot]

Sample usage of YouPlot

Install YouPlot, then pipe benchmark output to it, for example:

swift package benchmark run --filter InternalUTCClock-now --metric wallClock --format histogramPercentiles --path stdout --no-progress | uplot lineplot -H

[screenshot]

JMH Visualization

Using jmh.morethan.io

[screenshots]

Output

The default text output from Benchmark is oriented around the five-number summary percentiles, plus the last decile (p90) and the last percentile (p99) - it's thus a variation of a seven-figure summary with a focus on the 'bad' end of the results (as those are typically what we care about addressing). We've found that focusing on percentiles rather than averages or standard deviations is more useful for a wider range of benchmark measurements and gives a deeper understanding of the results. Percentiles allow for a consistent way of expressing benchmark results for both throughput and latency measurements (which typically do not have a standardized distribution and are almost always multi-modal in nature). This multi-modal nature of the measurements means that the common statistical measures of mean and standard deviation can be misleading.

API and file format stability

The API is deemed stable as of 1.0.0 and follows semantic versioning for future releases.

The export file formats that are externally defined (e.g. JMH or HDR Histogram formats) will follow the upstream definitions if they change, but have been quite stable for several years.

The Histogram codable representation is not stable and may change if the Histogram implementation changes.

The benchmark internal baseline representation (stored in .benchmarkBaselines) is not stable, is not viewed as public API, and may break over time.

For those wanting to save benchmark data over time, it's recommended to export data in, e.g., HDR Histogram representations (percentiles, average, stddev, etc.) or simply to post-process the histogramSamples format (which is raw data) into your desired representation.

PRs for additional standardized formats are welcome, as the export formats are the intended stable interface for saving such data.

CI build note

The badges above show that macOS builds are failing on CI because GitHub has not yet provided runners for macOS 13 Ventura; the package works on macOS in practice.

Description

  • Swift Tools 5.7.0