The trace view in the CPU Profiler provides several ways to view information from recorded traces.
For method traces and function traces, you can view the Call Chart directly in the Threads timeline, and the Flame Chart, Top Down, Bottom Up, and Events tabs in the Analysis pane. For system traces, you can view Trace Events directly in the Threads timeline, along with the same four Analysis pane tabs.
Mouse and keyboard shortcuts are available for easier navigation of Call Charts or Trace Events.
Inspect traces using the Call Chart
The Call Chart provides a graphical representation of a method trace or function trace, where the period and timing of a call is represented on the horizontal axis, and its callees are shown along the vertical axis. Calls to system APIs are shown in orange, calls to your app’s own methods are shown in green, and calls to third-party APIs (including Java language APIs) are shown in blue. Figure 4 shows an example call chart and illustrates the concept of self time, children time, and total time for a given method or function. You can learn more about these concepts in the section on how to inspect traces using Top Down and Bottom Up.
Tip: To jump to the source code of a method or function, right-click it and select Jump to Source. This works from any of the Analysis pane tabs.
Inspect traces using the Flame Chart tab
The Flame Chart tab provides an inverted call chart that aggregates identical call stacks. That is, identical methods or functions that share the same sequence of callers are collected and represented as one longer bar in a flame chart (rather than displaying them as multiple shorter bars, as shown in a call chart). This makes it easier to see which methods or functions consume the most time. However, this also means that the horizontal axis doesn't represent a timeline; instead, it indicates the relative amount of time each method or function takes to execute.
To help illustrate this concept, consider the call chart in figure 5. Note that method D makes multiple calls to B (B1, B2, and B3), and some of those calls to B make a call to C (C1 and C3).
Because B1, B2, and B3 share the same sequence of callers (A → D → B) they are aggregated, as shown in figure 6. Similarly, C1 and C3 are aggregated because they share the same sequence of callers (A → D → B → C); note that C2 is not included because it has a different sequence of callers (A → D → C).
The aggregated calls are used to create the flame chart, as shown in figure 7. Note that, for any given call in a flame chart, the callees that consume the most CPU time appear first.
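The aggregation described above can be sketched in a few lines: calls are keyed by their full caller path, and durations for identical paths are summed into a single bar. The call names, paths, and durations below are hypothetical, chosen to mirror the A → D → B example.

```python
from collections import defaultdict

# Hypothetical recorded calls: (caller path, method name, duration in µs).
# B1, B2, B3 share the caller path A → D; C1, C3 share A → D → B;
# C2 has the distinct path A → D.
calls = [
    (("A", "D"), "B", 4),           # B1
    (("A", "D"), "B", 3),           # B2
    (("A", "D"), "B", 2),           # B3
    (("A", "D", "B"), "C", 2),      # C1
    (("A", "D", "B"), "C", 1),      # C3
    (("A", "D"), "C", 3),           # C2
]

# Sum durations for calls that share the same full stack.
aggregated = defaultdict(int)
for path, name, duration in calls:
    aggregated[path + (name,)] += duration

# B1, B2, and B3 merge into one 9 µs bar; C1 and C3 merge into one
# 3 µs bar; C2 remains a separate bar because its callers differ.
```

This is why a flame chart's horizontal axis shows relative time per unique call stack rather than a timeline: the three separate executions of B collapse into one bar whose width is their combined duration.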
Inspect traces using Top Down and Bottom Up
The Top Down tab displays a list of calls in which expanding a method or function node displays its callees. Figure 8 shows a top down graph for the call chart in figure 4. Each arrow in the graph points from a caller to a callee.
As shown in figure 8, expanding the node for method A in the Top Down tab displays its callees, methods B and D. After that, expanding the node for method D exposes its callees, methods B and C, and so on. Similar to the Flame Chart tab, the top down tree aggregates trace information for identical methods that share the same call stack. In other words, the Flame Chart tab provides a graphical representation of the Top Down tab.
The Top Down tab provides the following information to help describe CPU time spent on each call (times are also represented as a percentage of the thread’s total time over the selected range):
- Self: the time the method or function call spent executing its own code and not that of its callees, as illustrated in figure 4 for method D.
- Children: the time the method or function call spent executing its callees and not its own code, as illustrated in figure 4 for method D.
- Total: the sum of the method’s Self and Children time. This represents the total time the app spent executing a call, as illustrated in figure 4 for method D.
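The relationship among these three values can be expressed directly: Children is the sum of the callees' total times, and Self is whatever remains of the call's Total. The durations below are hypothetical stand-ins for a figure-4-style chart.

```python
def timing(total_ms, callee_totals_ms):
    """Return (self, children, total) as defined in the Top Down tab.

    total_ms: wall-clock duration of the call's bar in the call chart.
    callee_totals_ms: total times of the calls it made directly.
    """
    children = sum(callee_totals_ms)   # time spent executing callees
    self_time = total_ms - children    # time spent in the call's own code
    return self_time, children, total_ms

# Hypothetical: method D runs for 10 ms total and its direct callees
# (B and C) account for 4 ms and 3 ms of that.
self_d, children_d, total_d = timing(10, [4, 3])
```

Here D's Self time is 3 ms: the 10 ms Total minus the 7 ms its callees consumed.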
The Bottom Up tab displays a list of calls in which expanding a function or method's node displays its callers. Using the example trace shown in figure 8, figure 9 provides a bottom up tree for method C. Opening the node for method C in the bottom up tree displays each of its unique callers, methods B and D. Note that, although B calls C twice, B appears only once when expanding the node for method C in the bottom up tree. After that, expanding the node for B displays its callers, methods A and D.
The Bottom Up tab is useful for sorting methods or functions by those that consume the most (or least) CPU time. You can inspect each node to determine which callers spend the most CPU time invoking those methods or functions. Compared to the top down tree, timing info for each method or function in a bottom up tree is in reference to the method at the top of each tree (top node). CPU time is also represented as a percentage of the thread’s total time during that recording. The following table helps explain how to interpret timing information for the top node and its callers (sub-nodes).
| | Self | Children | Total |
|---|---|---|---|
| Method or function at the top of the bottom up tree (top node) | Represents the total time the method or function spent executing its own code and not that of its callees. Compared to the top down tree, this timing information represents a sum of all calls to this method or function over the duration of the recording. | Represents the total time the method or function spent executing its callees and not its own code. Compared to the top down tree, this timing information represents the sum of all calls to this method or function's callees over the duration of the recording. | The sum of the self time and children time. |
| Callers (sub-nodes) | Represents the total self time of the callee when being called by the caller. Using the bottom up tree in figure 9 as an example, the self time for method B would equal the sum of the self times for each execution of method C when called by B. | Represents the total children time of the callee when being invoked by the caller. Using the bottom up tree in figure 9 as an example, the children time for method B would equal the sum of the children times for each execution of method C when called by B. | The sum of the self time and children time. |
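The caller rows of that table can be sketched as a simple aggregation: for a chosen top node (C here), each caller's row sums the self and children times of every execution of C that caller triggered. The per-call timings below are hypothetical.

```python
from collections import defaultdict

# Hypothetical per-call records for method C: (caller, self µs, children µs).
# B calls C twice; D calls C once.
calls_to_c = [
    ("B", 5, 0),
    ("B", 2, 0),
    ("D", 4, 0),
]

# Build the caller rows of a bottom-up tree for C:
# caller -> [summed self time, summed children time].
bottom_up = defaultdict(lambda: [0, 0])
for caller, self_t, children_t in calls_to_c:
    bottom_up[caller][0] += self_t
    bottom_up[caller][1] += children_t

# B's row shows 7 µs of self time: the sum of C's self times across
# both executions of C that B invoked, as the table describes.
```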
Note: For a given recording, Android Studio stops collecting new data when the profiler reaches the file size limit (however, this does not stop the recording). This typically happens much more quickly when performing instrumented traces because this type of tracing collects more data in a shorter time, compared to a sampled trace. If you extend the inspection time into a period of the recording that occurred after reaching the limit, timing data in the trace pane does not change (because no new data is available). Additionally, the trace pane displays NaN for timing information when you select only the portion of a recording that has no data available.
Inspect traces using the Events table
The Events table lists all calls in the currently selected thread. You can sort them by clicking on the column headers. By selecting a row in the table, you can navigate the timeline to the start and end time of the selected call. This allows you to accurately locate events on the timeline.
Inspect system traces
When inspecting a system trace, you can examine Trace Events in the Threads timeline to view the details of the events occurring on each thread. Hover your mouse pointer over an event to see the name of the event and the time spent in each state. Click an event to see more information in the Analysis pane.
Inspect system traces: CPU cores
In addition to CPU scheduling data, system traces also include CPU frequency by core. This shows the amount of activity on each core and may give you an idea of which ones are the "big" or "little" cores in modern mobile processors.
The CPU Cores pane (as shown in figure 11) shows thread activity scheduled on every core. Hover your mouse pointer over a thread activity to see which thread the core is running at that point in time.
For additional information on inspecting system traces, see the Investigate UI performance problems section.
Inspect system traces: Frame rendering timeline
You can inspect how long it takes your app to render each frame on the main thread and RenderThread to investigate bottlenecks that cause UI jank and low frame rates.
To see frame rendering data, record a trace using a configuration that allows you to Trace System Calls. After recording the trace, look for info about each frame under the Frames timeline in the Display section, as shown in figure 12.
The tracks shown in the Display section are:
- Frames: when a frame is being drawn. Long frames (greater than 16 ms) are colored red.
- SurfaceFlinger: when SurfaceFlinger processes the frame buffers. SurfaceFlinger is a system process responsible for sending buffers to display.
- VSYNC: a signal that synchronizes the display pipeline. A frame that misses a VSYNC will introduce extra input latency. This is especially important on high-refresh-rate displays.
- BufferQueue: how many frame buffers are queued up, waiting for SurfaceFlinger to consume. For apps deployed to devices running Android 9 or higher, this track shows the buffer count of the app's surface BufferQueue (0, 1, or 2). It can help you understand the state of image buffers as they move between the Android graphics components. For example, a value of 2 means the app is currently triple-buffered, which may result in extra input latency.
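The Frames track's red coloring rule is a simple threshold check. As a minimal sketch with hypothetical per-frame durations, finding the frames that would be flagged looks like this:

```python
# Hypothetical per-frame render durations (ms) as read off the Frames track.
frame_ms = [8.1, 15.9, 24.3, 16.7, 9.0]

# Frames longer than 16 ms miss the 60 Hz deadline and are colored red.
DEADLINE_MS = 16

long_frames = [(i, d) for i, d in enumerate(frame_ms) if d > DEADLINE_MS]
# Frames 2 and 3 exceed the deadline and would appear red in the track.
```

Note that on high-refresh-rate displays the per-frame budget is tighter (for example, about 8.3 ms at 120 Hz), so the same durations can miss more VSYNC deadlines.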
Inspect system traces: Process Memory (RSS)
For apps deployed to devices running Android 9 or higher, the Process Memory (RSS) section shows the amount of physical memory currently in use by the app.
This is the total amount of physical memory currently in use by your process. On Unix-based systems, this is known as the "Resident Set Size", and is the combination of all the memory used by anonymous allocations, file mappings, and shared memory allocations.
For Windows developers, Resident Set Size is analogous to the Working Set Size.
This counter tracks how much physical memory is currently used by the process's normal memory allocations: allocations that are anonymous (not backed by a specific file) and private (not shared). In most applications, these are made up of heap allocations (with new) and stack memory. When swapped out from physical memory, these allocations are written to the system swap file.
This counter tracks the amount of physical memory the process is using for file mappings – that is, memory mapped from files into a region of memory by the memory manager.
This counter tracks how much physical memory is being used to share memory between this process and other processes in the system.
This counter tracks how much of the swap file the process is using. Specifically, it is the amount of allocated memory that is currently swapped out.
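Putting the counters together: the Resident Set Size is the sum of the resident portions described above (anonymous allocations, file mappings, and shared memory); swapped-out pages are not resident and so are not part of RSS. The values below are hypothetical, purely to show the arithmetic.

```python
# Hypothetical counter values (MB) as might be shown in the
# Process Memory (RSS) section.
allocated = 120.0      # anonymous, private allocations (heap + stack)
file_mappings = 35.5   # physical memory backing memory-mapped files
shared = 12.5          # physical memory shared with other processes
swapped = 20.0         # swapped-out allocations: NOT resident, not in RSS

# Resident Set Size: only memory currently in physical RAM.
rss = allocated + file_mappings + shared
```

On Linux this mirrors how `/proc/[pid]/status` breaks RSS down into anonymous, file-backed, and shared components, with swap reported separately.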