Use the Macrobenchmark library for testing larger use cases of your application, including application startup and complex UI manipulations, such as scrolling a RecyclerView or running animations. If you're looking to test smaller areas of your code, refer to the Microbenchmark library.
The library outputs benchmarking results to both the Android Studio console and a JSON file with more detail. It also provides trace files that you can load and analyze in Android Studio.
Use the Macrobenchmark library in a continuous integration (CI) environment as described in Run benchmarks in Continuous Integration.
Baseline Profiles can be generated using Macrobenchmark. Follow the guide below to set up the Macrobenchmark library and then create a Baseline Profile.
Project setup
We recommend that you use Macrobenchmark with the latest version of Android Studio, as there are new features in that version of the IDE that integrate with Macrobenchmark.
Set up the Macrobenchmark module
Macrobenchmarks require a com.android.test module, separate from your app code, that is responsible for running the tests that measure your app.
In Android Studio, a template is available to simplify Macrobenchmark module setup. The benchmarking module template automatically creates a module in your project for measuring the app built by an app module, including a sample startup benchmark.
To use the module template to create a new module, do the following:
1. Right-click your project or module in the Project panel in Android Studio and click New > Module.
2. Select Benchmark from the Templates pane.
3. Customize the target application (the app to be benchmarked), as well as the package and module names for the new Macrobenchmark module.
4. Click Finish.
Set up the application
To benchmark an app (called the target of the macrobenchmark), that app must be profileable, which enables reading detailed trace information without affecting performance. The module wizard automatically adds the <profileable> tag to the app's AndroidManifest.xml file.
Configure the benchmarked app as closely as possible to the release (production) version. Set it up as non-debuggable and preferably with minification on, which improves performance. You typically do this by creating a copy of the release variant, which performs the same but is signed locally with debug keys. Alternatively, you can use initWith to instruct Gradle to do it for you:
Groovy
buildTypes {
    release {
        minifyEnabled true
        shrinkResources true
        proguardFiles getDefaultProguardFile('proguard-android-optimize.txt'), 'proguard-rules.pro'
    }
    benchmark {
        initWith buildTypes.release
        signingConfig signingConfigs.debug
        proguardFiles 'benchmark-rules.pro'
    }
}
Kotlin
buildTypes {
    getByName("release") {
        isMinifyEnabled = true
        isShrinkResources = true
        proguardFiles(getDefaultProguardFile("proguard-android-optimize.txt"), "proguard-rules.pro")
    }
    create("benchmark") {
        initWith(getByName("release"))
        signingConfig = signingConfigs.getByName("debug")
        proguardFiles("benchmark-rules.pro")
    }
}
Perform a Gradle sync, open the Build Variants panel on the left, and select the benchmark variant of both the app and the Macrobenchmark module. This ensures that running the benchmark builds and tests the correct variant of your app:
(Optional) Set up multi-module application
If your app has more than one Gradle module, you need to make sure your build scripts know which build variant to compile. Without this, the newly added benchmark buildType causes the build to fail with the following error message:
> Could not resolve project :shared.
Required by:
project :app
> No matching variant of project :shared was found.
...
You can fix the issue by adding the matchingFallbacks property to the benchmark buildType of your :macrobenchmark and :app modules. The rest of your Gradle modules can keep the same configuration as before.
Groovy
benchmark {
    initWith buildTypes.release
    signingConfig signingConfigs.debug
    matchingFallbacks = ['release']
}
Kotlin
create("benchmark") {
    initWith(getByName("release"))
    signingConfig = signingConfigs.getByName("debug")
    matchingFallbacks += listOf("release")
}
When selecting the build variants in your project, choose benchmark for the :app and :macrobenchmark modules, and release for any other modules you have in your app, as seen in the following image:
For more information, check variant-aware dependency management.
(Optional) Set up product flavors
If you have multiple product flavors set in your app, you need to configure the :macrobenchmark module so that it knows which product flavor of your app to build and benchmark. Without this configuration, you may get a build error similar to the one with multiple Gradle modules:
Could not determine the dependencies of task ':macrobenchmark:connectedBenchmarkAndroidTest'.
> Could not determine the dependencies of null.
> Could not resolve all task dependencies for configuration ':macrobenchmark:benchmarkTestedApks'.
> Could not resolve project :app.
Required by:
project :macrobenchmark
> The consumer was configured to find a runtime of a component, as well as attribute 'com.android.build.api.attributes.BuildTypeAttr' with value 'benchmark', attribute 'com.android.build.api.attributes.AgpVersionAttr' with value '7.3.0'. However we cannot choose between the following variants of project :app:
- demoBenchmarkRuntimeElements
- productionBenchmarkRuntimeElements
All of them match the consumer attributes:
...
For this guide, we're using two product flavors in our :app module, demo and production, as you can see in the following snippet:
Groovy
flavorDimensions 'environment'
productFlavors {
    demo {
        dimension 'environment'
        // ...
    }
    production {
        dimension 'environment'
        // ...
    }
}
Kotlin
flavorDimensions += "environment"
productFlavors {
    create("demo") {
        dimension = "environment"
        // ...
    }
    create("production") {
        dimension = "environment"
        // ...
    }
}
There are two ways to configure benchmarking with multiple product flavors:
Use missingDimensionStrategy
Specifying missingDimensionStrategy in the defaultConfig of the :macrobenchmark module tells the build system which flavor to fall back to. You specify which flavor to use for each dimension that is not defined in the module. In the following example, the production flavor is used as the default for the environment dimension:
Groovy
defaultConfig {
    missingDimensionStrategy "environment", "production"
}
Kotlin
defaultConfig {
    missingDimensionStrategy("environment", "production")
}
This way, the :macrobenchmark module builds and benchmarks only the specified product flavor, which is helpful if you know that only one product flavor has the proper configuration to be benchmarked.
Define product flavors in the :macrobenchmark module
If you want to build and benchmark other product flavors, you need to define them in the :macrobenchmark module. Define them similarly to the :app module, but only assign the productFlavors to a dimension; no other settings are required:
Groovy
flavorDimensions 'environment'
productFlavors {
    demo {
        dimension 'environment'
    }
    production {
        dimension 'environment'
    }
}
Kotlin
flavorDimensions += "environment"
productFlavors {
    create("demo") {
        dimension = "environment"
    }
    create("production") {
        dimension = "environment"
    }
}
After defining and syncing the project, choose the desired build variant from the Build Variants pane:
For more information, see Resolve build errors related to variant matching.
Create a macrobenchmark class
Benchmark testing is provided through the MacrobenchmarkRule JUnit4 rule API in the Macrobenchmark library. It contains a measureRepeated method that lets you specify various conditions for how to run and benchmark the target application. You need to specify at least the packageName of the target application, the metrics you want to measure, and how many iterations the benchmark should run.
Kotlin
@LargeTest
@RunWith(AndroidJUnit4::class)
class SampleStartupBenchmark {
    @get:Rule
    val benchmarkRule = MacrobenchmarkRule()

    @Test
    fun startup() = benchmarkRule.measureRepeated(
        packageName = TARGET_PACKAGE,
        metrics = listOf(StartupTimingMetric()),
        iterations = DEFAULT_ITERATIONS,
        setupBlock = {
            // Press home button before each run to ensure the starting activity isn't visible.
            pressHome()
        }
    ) {
        // Starts the default launch activity.
        startActivityAndWait()
    }
}
Java
@LargeTest
@RunWith(AndroidJUnit4.class)
public class SampleStartupBenchmark {
    @Rule
    public MacrobenchmarkRule benchmarkRule = new MacrobenchmarkRule();

    @Test
    public void startup() {
        benchmarkRule.measureRepeated(
            /* packageName */ TARGET_PACKAGE,
            /* metrics */ Arrays.asList(new StartupTimingMetric()),
            /* iterations */ 5,
            /* measureBlock */ scope -> {
                // Starts the default launch activity.
                scope.startActivityAndWait();
                return Unit.INSTANCE;
            }
        );
    }
}
For all the options on how to customize your benchmark, see the Customize the benchmarks section.
Run the benchmark
Run the test from within Android Studio to measure the performance of your app on your device. You can run the benchmarks the same way you run any other @Test, using the gutter action next to your test class or method, as shown in the following image.
You can also run all benchmarks in a Gradle module from the command line by executing the connectedCheck command:
./gradlew :macrobenchmark:connectedCheck
Or to run a single test:
./gradlew :macrobenchmark:connectedCheck -P android.testInstrumentationRunnerArguments.class=com.example.macrobenchmark.startup.SampleStartupBenchmark#startup
See the Benchmarking in CI section for information on how to run and monitor benchmarks in continuous integration.
Benchmark results
After a successful benchmark run, metrics are displayed directly in Android Studio and are also output to a JSON file for CI usage. Each measured iteration captures a separate system trace. You can open these result traces by clicking one of the links in the Test Results pane, as shown in the following image.
When the trace is loaded, Android Studio prompts you to select the process to analyze. The selection is pre-populated with the target app process:
Once the trace file is loaded, Studio shows the results in the CPU profiler tool:
JSON reports and any profiling traces are also automatically copied from device to host. These are written on the host machine at:
project_root/module/build/outputs/connected_android_test_additional_output/debugAndroidTest/connected/device_id/
Access trace files manually
If you want to use the Perfetto tool to analyze a trace file, there are extra steps involved. Perfetto allows you to inspect all processes happening across the device during the trace, while Android Studio's CPU profiler limits inspection to a single process.
If you invoke the tests from Android Studio or using the Gradle command line, the trace files are automatically copied from device to host. These are written on the host machine at:
project_root/module/build/outputs/connected_android_test_additional_output/debugAndroidTest/connected/device_id/TrivialStartupBenchmark_startup[mode=COLD]_iter002.perfetto-trace
Once you have the trace file on your host system, you can open it in Android Studio with File > Open in the menu. This shows the profiler tool view shown in the previous section.
Configuration errors
If the app is misconfigured (debuggable or non-profileable), Macrobenchmark returns an error rather than reporting an incorrect or incomplete measurement. You can suppress these errors with the androidx.benchmark.suppressErrors argument.
Errors are also thrown when attempting to measure on an emulator or on a low-battery device, as this may compromise core availability and clock speed.
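For example, if you intentionally run benchmarks on an emulator and accept the less realistic results, you can suppress the corresponding errors with an instrumentation runner argument. The following is a minimal sketch for the :macrobenchmark module's Gradle file; the EMULATOR and LOW-BATTERY error codes shown are assumptions based on the error types described above:

Kotlin
android {
    defaultConfig {
        // Suppress (rather than fail on) the named configuration errors.
        // Measurements taken under these conditions are less reliable.
        testInstrumentationRunnerArguments["androidx.benchmark.suppressErrors"] =
            "EMULATOR,LOW-BATTERY"
    }
}

You can also pass the same argument from the command line with -P android.testInstrumentationRunnerArguments.androidx.benchmark.suppressErrors, following the same pattern as the class filter shown earlier.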
Customize the benchmarks
The measureRepeated function accepts various parameters that influence which metrics the library collects, how your application is started and compiled, and how many iterations the benchmark runs.
Capture the metrics
Metrics are the main type of information extracted from your benchmarks. Available options are StartupTimingMetric, FrameTimingMetric, and TraceSectionMetric. For more information about them, see the Capture the metrics page.
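A single benchmark can collect more than one of these metrics at once. Building on the SampleStartupBenchmark class above, here is a sketch that records startup timing together with a custom trace section; the section name "MySection" is a hypothetical example and must match a section your app actually emits, and TraceSectionMetric requires an experimental-API opt-in at the time of writing:

Kotlin
@OptIn(ExperimentalMetricApi::class)
@Test
fun startupWithCustomSection() = benchmarkRule.measureRepeated(
    packageName = TARGET_PACKAGE,
    // Collect startup timing plus the duration of the "MySection" trace section.
    metrics = listOf(StartupTimingMetric(), TraceSectionMetric("MySection")),
    iterations = DEFAULT_ITERATIONS,
    setupBlock = { pressHome() }
) {
    startActivityAndWait()
}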
Improve trace data with custom events
It can be useful to instrument your application with custom trace events, which are seen with the rest of the trace report and can help point out problems specific to your app. To learn more about creating custom trace events, see the Define custom events guide.
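As an illustration, here is a minimal sketch of emitting a custom trace section from app code using the trace helper in the androidx.tracing:tracing-ktx library; loadItems, Item, and repository are hypothetical names standing in for your own code:

Kotlin
import androidx.tracing.trace

// `repository.fetchItems()` is a hypothetical expensive call in your app.
fun loadItems(): List<Item> = trace("MySection") {
    // Everything inside this block shows up as the "MySection" slice in the trace,
    // and can be measured with TraceSectionMetric("MySection").
    repository.fetchItems()
}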
CompilationMode
Macro benchmarks can specify a CompilationMode, which defines how much of the app should be pre-compiled from DEX bytecode (the bytecode format within an APK) to machine code (similar to pre-compiled C++).
By default, macrobenchmarks run with CompilationMode.DEFAULT, which installs a Baseline Profile (if available) on Android 7 (API level 24) and higher. On Android 6 (API level 23) or lower, the compilation mode fully compiles the APK, matching the default system behavior.
You can install a Baseline Profile if the target application contains both a Baseline Profile and the ProfileInstaller library.
On Android 7 and higher, you can customize the CompilationMode to affect the amount of on-device pre-compilation and mimic different levels of Ahead Of Time (AOT) compilation or JIT caching. See CompilationMode.Full, CompilationMode.Partial, and CompilationMode.None.
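For instance, here is a sketch of passing a compilation mode to measureRepeated, reusing the parameters from the SampleStartupBenchmark example above; CompilationMode.Partial() pre-compiles the app using its Baseline Profile:

Kotlin
@Test
fun startupPartialCompilation() = benchmarkRule.measureRepeated(
    packageName = TARGET_PACKAGE,
    metrics = listOf(StartupTimingMetric()),
    // Pre-compile only the methods covered by the app's Baseline Profile.
    compilationMode = CompilationMode.Partial(),
    iterations = DEFAULT_ITERATIONS,
    setupBlock = { pressHome() }
) {
    startActivityAndWait()
}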
This functionality is built on ART compilation commands. Each benchmark clears profile data before it starts, to ensure non-interference between benchmarks.
StartupMode
To perform an activity start, you can pass a pre-defined startup mode (one of COLD, WARM, or HOT). This parameter changes how the activity launches and the process state at the start of the test.
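As a sketch, the startup mode is passed to measureRepeated alongside the other parameters from the earlier example; with StartupMode.COLD, the library kills the app process before each iteration so every launch is measured from scratch:

Kotlin
@Test
fun coldStartup() = benchmarkRule.measureRepeated(
    packageName = TARGET_PACKAGE,
    metrics = listOf(StartupTimingMetric()),
    iterations = DEFAULT_ITERATIONS,
    // COLD restarts the process for each iteration, so no setupBlock is
    // needed to reset the app's state between runs.
    startupMode = StartupMode.COLD
) {
    startActivityAndWait()
}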
To learn more about the types of startup, see the Android Vitals startup documentation.
Samples
A sample project is available as part of the android/performance-samples repository on GitHub.
Provide feedback
To report issues or submit feature requests for Jetpack Macrobenchmark, see the public issue tracker.