Tech Talks


Exploring real-time GPU performance profiling with Metal counters

    Profile the GPU with the Metal counters API in macOS Big Sur and iOS 14. The API provides runtime access to low-level GPU profiling data that was previously available only through the offline tools in Xcode and Instruments. Metal counters speed up optimization by giving you access to key GPU data, helping you fine-tune your app's performance for faster, smoother app and game experiences. Learn how to collect and parse these low-level GPU timestamps, and use the in-depth data to guide performance tuning in Metal.


    Hi, I'm Lionel Lemarié from the GPU software team. In this session, we're going to use the Metal Counter API to get precise GPU timings at runtime.

    We will cover a few topics. We'll start with a quick intro of the Metal Counter API.

    Then we'll take a brief look at the main features of a typical live profiling HUD.

    We'll use the API step-by-step to collect the profiling information.

    And we'll conclude by looking at how this data fits into the HUD.

    So let's start with a quick intro of the API.

    The Metal Counter API is new in iOS 14. It was available in macOS Catalina and has been extended in macOS Big Sur. On iOS and macOS with Apple Silicon, it gives you access to stage boundary timings. That is, precise start and end times for vertex, fragment and compute passes. On Intel and AMD GPUs, you can get draw boundary timings, precise GPU timestamps even within individual passes.

    Next, let's do a quick recap of the main features of a live profiling HUD.

    You'd use a live HUD to track your app's performance at runtime. It can help you find problem areas that need to be investigated off-line in Xcode or Instruments.

    You can also use them to tune your resolution and quality settings per device, for instance.

    As an example, a typical live HUD may look something like this.

    Frame times, as a scrolling line chart, to help you catch frame hitches; stats about your memory usage, resolution, and more; and a timeline of CPU events. Today, we'll add GPU events to that timeline.

    Typically, for your CPU markers, you would instrument your most important functions using mach_absolute_time to get the start and end timestamps.

    A good start with CPU markers is to put them around your command buffer work-- a start marker when you create it and an end marker when you commit it. That gives you an overview of the CPU costs of your rendering.

    Now we want to add equivalent GPU markers. You may be familiar with the GPU events on the Metal System Trace timeline, as we're seeing here. On iOS in this example, it shows the different stages of the tile-based deferred rendering architecture. Here we see the vertex and fragment processing work.

    To do this, the GPU firmware logs the start and end of the vertex stage.

    Then it logs the start and end of the fragment stage.

    And the Metal System Trace displays the stages on the timeline. For immediate mode GPUs, you can log events for groups of draw calls. For example, you would log the start of rendering object one before all of its draw calls and then log the start of rendering object two. And finally, the end of object two. You then have a timeline that shows precisely how long the GPU spends rendering each object.

    Now let's use the Metal Counter API to achieve that.

    We'll start by checking which counter sampling modes are available. We need to know whether the GPU should record at the stage boundary, for the vertex, fragment, or compute stages, or at the draw boundary, as we've just seen.

    We simply use the supportsCounterSampling API to check whether the current device supports stage boundary, for a TBDR GPU, or draw boundary, for an immediate mode GPU.

    Next we check which counter sets are available on the device. Counter sets include timestamps, stage utilization and pipeline statistics. For our GPU markers, we need the counter set that collects timestamps. So we enumerate all available counter sets on the device and choose the one for timestamps.

    Once you have the right counter set, inspect it to ensure that it does have timestamps, as some devices may not support them.

    We're done with the initial setup. Now let's see what's needed at runtime during each frame. There are just four easy steps. First we'll create a sample buffer with a size, storage mode and the counter set we just looked up. Then we'll add the sample buffer to the pass descriptor. Note that it means that you need at least one buffer per pass. Next, if we're using sampling at draw boundary, we'll add sampling commands at important points. Finally, in the completion handler, we'll resolve the counters. And we'll talk about aligning CPU and GPU timestamps if needed. Let's check out each step in detail.

    First, we create a sample buffer using a descriptor. We specify the maximum number of samples it can hold, so it has the right size. We'll use six samples here, but you'll typically use more than that.

    Then we set the storage mode. Shared mode is great here. It's not a lot of data, and it makes accessing the counters extra easy.

    We specify the counter set to use, the one for timestamps. Finally, we create the sample buffer. So far, so good. Now we have a buffer for six samples.

    As an example, let's use it in a render encoder.

    You would do it the same for compute and blit encoders. For this, you use the sample buffer attachment from the render pass descriptor. If we are using stage boundary, this is where we set it up. We specify the start and end of the vertex stage. We're putting them at index zero and one in the sample buffer. That's how the GPU knows where to write each sample and how you know where to retrieve them from. Same for the start and end of the fragment stage.

    Finally, we point to the sample buffer that we just created to store those samples.

    To sample a draw boundary, you add sample commands at key points of your command stream.

    The first obvious placement for these is before and after all draw calls. So after you've created a new encoder, you immediately add a sample command. We'll put it at index four, since we've already reserved the first four slots for stage boundary samples.

    After all your draw calls, right before ending your encoders, you add a sample command at index five.

    So the GPU will record a timestamp before and after all the work for that encoder. You can add more sample commands between groups of draw calls to mark important milestones. Just make sure your sample buffer is allocated with enough space in advance.

    Speaking of which, we've allocated a buffer big enough for both stage boundary and draw boundary sampling. You can easily optimize it by allocating just enough for stage or draw boundary sampling in isolation, since they are mutually exclusive.

    Right. The GPU has been instructed to sample timestamp counters at stage or draw boundary. Next, we wait for the rendering to complete, and in the handler, we collect the data.

    Remember that we created a sample buffer per encoder. So in the command buffer completion handler, we may need to parse multiple sample buffers.

    For each one, we resolve the counters. That translates the device-specific data into a unified Metal struct that's super easy to parse. Simply point to it and use the counter result struct. As we specified that vertexStart should be at index zero, we read it directly from there. Then we'll do the same for all other samples.

    Note that some error checking is needed here. It's possible that the GPU failed to fill the sample buffer, so you need to check that the resolve step collected the expected number of samples and that each sample is valid.

    The GPU will use a predefined error value if it can't get a specific timestamp.

    On iOS and Apple Silicon devices, the GPU timestamps are aligned to mach_absolute_time so you can directly compare them to CPU timestamps.

    On Intel and AMD GPUs, an extra step is needed. They need to be translated from a vendor specific time domain. This is because depending on how busy the GPU is, how much power it consumes and how hot it's running, its clock frequency is constantly adjusted over time, which affects the timestamps.

    To address this on immediate mode GPUs, you use the sampleTimestamps API to query matching CPU and GPU timestamps at a given time.

    You do it at regular intervals to avoid drifting and keep precise correlation over time. Then you do a simple linear interpolation of the samples collected.

    As an example, you may call sampleTimestamps inside the command buffer completion handler so you get one correlation per frame.

    Let's say you query CPU and GPU timestamps at t0. And then at the next frame, you query them at t1.

    All GPU counters from the sample buffers can now be scaled and offset back into CPU domain.

    And that's all we need. So let's look at how it could all be displayed together.

    We're seeing the CPU markers. We captured them with mach_absolute_time. We're seeing vertex, fragment and compute stages all overlapping and aligned with CPU activity. You can even collect the mach_absolute_time inside the presented handler to align all the markers to actual glass-to-glass frames and precisely display all the events within each frame.

    Using this HUD, you get a great view of whether you're CPU or GPU bound, your dependencies and sync points, the breakdown of vertex, fragment and compute work and how they affect each other.

    All of that, live, right there inside your app.

    There are a few things you can watch out for. Don't update the HUD too often. Just like an FPS counter, live data can be hard to read if it's constantly changing. You can collect the timestamps every frame but only update the markers on-screen once per second, for example. It makes it significantly easier to follow.

    Secondly, GPU activity depends on its clock rate. Seeing a high GPU occupancy does not necessarily mean it is maxed out.

    As the system only uses as much power as needed, it will balance power and performance.

    As a consequence, you might see the GPU being 80% busy in the HUD. But if it is running at half the max clock rate, then it would actually be running at 40% of the peak performance and have plenty of headroom.

    And as always, you should handle errors, but you should also watch out for inconsistencies.

    For example, counters can overflow, which would cause a new value to be smaller than the previous one and may trigger a negative duration.

    Or putting your device to sleep or hibernate while sampling counters may also cause large outliers. Those are rare events, and you should skip them gracefully to avoid glitches in your display or logs.

    And so to recap, we've just gone through the steps to use the Metal Counter API to collect GPU timestamps. To do that, we used the device supportsCounterSampling method to find out which sampling modes are supported. We enumerated the counter sets to find the set with the GPU timestamps. We created a new sample buffer using its descriptor and used it in the render command encoder. You will want to do the same with a blit and compute encoder too. We added specific sample commands before and after all the draws. You can add them between draws, dispatches, and blits too to get sub-pass timings.

    We resolved the counters into CPU memory. And finally, we realigned them if needed.

    And with that, you have all the data needed for a powerful, live GPU profiling HUD to display on top of your app.

    And this API gives you access to more than just the GPU timestamps. You can get summarized per stage information, which is easier to process if you are not drawing the events on the timeline. And importantly, you can get some in-depth statistics, such as the number of invocations for vertex and fragment shaders and compute kernels and much more.

    There's enough to explore in the Metal Counter API, and it gives you access to a lot of information to profile your GPU performance at runtime.

    That's it for this session. Thanks for watching.

    • 3:38 - Checking for available Metal counters

      if (@available(macOS 11.0, iOS 14.0, *))
      {
          _supportsStageBoundary = [_device supportsCounterSampling:MTLCounterSamplingPointAtStageBoundary];
          _supportsDrawBoundary  = [_device supportsCounterSampling:MTLCounterSamplingPointAtDrawBoundary];
      }
    • 3:52 - Counter sets

      [_device.counterSets enumerateObjectsUsingBlock:^(id<MTLCounterSet> _Nonnull obj,
                                                        NSUInteger                 idx,
                                                        BOOL * _Nonnull            stop) {
          if ([[obj name] isEqualToString:MTLCommonCounterSetTimestamp])
              _counterSetTimestamp = obj;
      }];
    • 5:05 - Sampling counters on Apple GPUs


      // When setting up the render pass descriptor
      
      if (_supportsStageBoundary || _supportsDrawBoundary)
      {
          MTLCounterSampleBufferDescriptor *desc = [MTLCounterSampleBufferDescriptor new];
      
          desc.sampleCount = 6; // Number of samples to store 
          desc.storageMode = MTLStorageModeShared;
          desc.label       = @"Live Profiling HUD Metal counter sample buffer";
          desc.counterSet  = _counterSetTimestamp;
      
          id<MTLCounterSampleBuffer> sampleBuffer =
                                     [_device newCounterSampleBufferWithDescriptor:desc error:nil];
      
          MTLRenderPassSampleBufferAttachmentDescriptor *sampleBufferDesc =
                                        renderPassDescriptor.sampleBufferAttachments[0];
      
          if (_supportsStageBoundary)
          {
              sampleBufferDesc.startOfVertexSampleIndex   = 0;
              sampleBufferDesc.endOfVertexSampleIndex     = 1;
              sampleBufferDesc.startOfFragmentSampleIndex = 2;
              sampleBufferDesc.endOfFragmentSampleIndex   = 3;
          }
      
          sampleBufferDesc.sampleBuffer = sampleBuffer;
      }
    • 6:23 - Sampling counters at draw boundary

      // After creating a new render command encoder
      [renderCommandEncoder sampleCountersInBuffer:sampleBuffer
                                     atSampleIndex:4
                                       withBarrier:NO];
      
      // All draw calls
      [renderCommandEncoder sampleCountersInBuffer:sampleBuffer
                                     atSampleIndex:5
                                       withBarrier:NO];
      
      // End encoding
    • 7:28 - Collecting timestamps

      // For each tracked sampleBuffer, resolve the counters
      NSData *data = [sampleBuffer resolveCounterRange:NSMakeRange(0, 6)];
      
      MTLCounterResultTimestamp *sample = (MTLCounterResultTimestamp *)[data bytes];
      
      // And simply access the timestamps
      if (_supportsStageBoundary)
      {
          double vertexStart = sample[0].timestamp / (double)NSEC_PER_SEC;
      }
      
      // Check for errors
      if (sample[0].timestamp == MTLCounterErrorValue) 
      {
        // Handle error
      }
    • 9:05 - Aligning timestamps

      // On immediate mode GPU
      MTLTimestamp cpuTimestamp;
      MTLTimestamp gpuTimestamp;
      [_device sampleTimestamps:&cpuTimestamp gpuTimestamp:&gpuTimestamp];
      
      // Do a linear interpolation between correlated timestamps
      gpu_ns = cpu_t0 + (cpu_t1 - cpu_t0) * (gpu_timestamp - gpu_t0) / (gpu_t1 - gpu_t0);
