Metrics Element
Is it correct to summarize Metrics by saying it is a way to automatically generate data via manual and scheduled test runs, as well as a powerful way to visualize, exchange and compare that same data?
Randy Spong: Actually, that is possible with our Metrics Element, but it's a very small part of the technology, which we really consider to be what we call "game intelligence software." Games are bigger than they've ever been, so they contain more data, most of which is unused and inaccessible. We think all that information is the "sleeping giant" of development - what if developers were able to use that data to analyze and tune gameplay? Or track performance metrics? You get the idea. With Metrics, we give developers the tools to collect data from any source - during development and after the gold master is pressed. From there, you create reports, view the data in graphic form, share with anyone on the team, and so on. Project managers can track gameplay and performance metrics against data from the bug tracking and project management systems, from the same intuitive interface.
Can it also do that at runtime with no perceptible overhead, so you could have it running in real-time, giving you rendering statistics to debug performance, for example, while comparing them with previous runs?
RS: The probe library that we ship with Metrics Element has an excellent performance profile. A developer can have literally hundreds of probes active simultaneously, with no noticeable performance hit. We've also integrated Metrics into Gamebryo, in order to give developers access to performance-related stats that can be aggregated across test runs, groups of players, or just for a specific machine.
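To make the idea concrete, here is a minimal sketch of what probe-style instrumentation generally looks like; the names are hypothetical illustrations, not the actual Metrics Element API. The trick that keeps hundreds of probes cheap is recording each sample into an in-memory buffer and deferring any expensive work to a later flush.

    // Hypothetical probe sketch (illustrative, not Emergent's API):
    // a scoped timer that records one named sample per instrumented block.
    #include <chrono>
    #include <string>
    #include <unordered_map>
    #include <utility>
    #include <vector>

    struct ProbeStore {
        // Samples are appended in memory and flushed in batches later,
        // so the per-probe cost is one clock read and one push_back.
        std::unordered_map<std::string, std::vector<double>> samples;
        void Record(const std::string& name, double ms) {
            samples[name].push_back(ms);
        }
    };

    class ScopedProbe {
    public:
        ScopedProbe(ProbeStore& store, std::string name)
            : m_store(store), m_name(std::move(name)),
              m_start(std::chrono::steady_clock::now()) {}
        ~ScopedProbe() {
            using namespace std::chrono;
            double ms = duration<double, std::milli>(
                steady_clock::now() - m_start).count();
            m_store.Record(m_name, ms);
        }
    private:
        ProbeStore& m_store;
        std::string m_name;
        std::chrono::steady_clock::time_point m_start;
    };

    // Usage (hypothetical names): { ScopedProbe p(gStore, "Physics.Update"); RunPhysics(); }

A scoped object like this costs one clock read on entry and one append on exit, which is why an instrument-everything approach can stay affordable.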
And could you also schedule benchmark runs after you've added a feature or optimized the engine, fully automated and based on a scene graph or scripted movement, to test for CPU bottlenecks? (Or is that part of Automation Element instead?)
RS: Metrics is an always-on collection tool. You can set it up to be constantly collecting and uploading data from any active system. If a developer configures his Metrics probes that way, he gets that actionable data for free, with no extra effort, during each test run.
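As a rough sketch of what "always-on" collection implies - our assumption about the shape of such a system, not Emergent's actual design - a background thread can periodically swap out the accumulated batch and hand it to an uploader, so every test run produces data with no per-run setup:

    // Hypothetical always-on collector (illustrative names throughout).
    #include <atomic>
    #include <chrono>
    #include <mutex>
    #include <string>
    #include <thread>
    #include <utility>
    #include <vector>

    struct Sample {
        std::string probe;
        double value;
    };

    class AlwaysOnCollector {
    public:
        void Record(Sample s) {
            std::lock_guard<std::mutex> lock(m_mutex);
            m_batch.push_back(std::move(s));
        }
        // Background loop: every interval, swap out the batch and upload it,
        // so game threads never wait on the network.
        void Run(std::atomic<bool>& running,
                 void (*upload)(const std::vector<Sample>&)) {
            while (running.load()) {
                std::this_thread::sleep_for(std::chrono::seconds(30));
                std::vector<Sample> batch;
                {
                    std::lock_guard<std::mutex> lock(m_mutex);
                    batch.swap(m_batch);
                }
                upload(batch);  // e.g. post a serialized batch to a metrics server
            }
        }
    private:
        std::mutex m_mutex;
        std::vector<Sample> m_batch;
    };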
Runtime Performance Analysis
The Gamebryo run-time performance analysis tools are said to be based on Metrics Element. Are they primarily exposing a subset of that functionality, or are the two complementary?
RS: Gamebryo includes a performance tracking layer that functions with or without the Metrics Element. Without the Metrics Element, users can track various statistics - the number of objects rendered, frame timings, triangle counts, etc. - using on-screen graphs. Users who license both pieces of Emergent technology, however, will have immediate access to this data via the Metrics Element, which provides a rich environment for aggregating and investigating all of it.
Gamebryo also integrates with both PIX and NVPerfHUD. We want to make as many tools available to developers as possible for data collection and analysis, and every tool has its strengths. NVPerfHUD is great for figuring out why this particular shader is tanking on that particular graphics card, but it's not really built for analyzing gameplay balance, or even for aggregating performance statistics across an entire compatibility lab. It just wasn't designed to do those things. Metrics is ideal for those tasks, though.
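For illustration, the engine-side tracking layer he describes might boil down to a per-frame stats record plus a short history feeding the on-screen graphs; this is a hypothetical sketch, not Gamebryo's actual implementation:

    // Hypothetical per-frame stats of the kind a tracking layer exposes.
    #include <cstddef>
    #include <deque>

    struct FrameStats {
        double frameMs = 0.0;          // frame time in milliseconds
        std::size_t objectsDrawn = 0;  // objects rendered this frame
        std::size_t triangles = 0;     // triangle count this frame
    };

    class StatsHistory {
    public:
        explicit StatsHistory(std::size_t capacity) : m_capacity(capacity) {}
        // Keep a rolling window of recent frames for the on-screen graph;
        // the same records could be forwarded to Metrics for aggregation.
        void Push(const FrameStats& s) {
            if (m_frames.size() == m_capacity) m_frames.pop_front();
            m_frames.push_back(s);
        }
        const std::deque<FrameStats>& Frames() const { return m_frames; }
    private:
        std::size_t m_capacity;
        std::deque<FrameStats> m_frames;
    };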
With that in mind, let's consider NVPerfHUD (or ATI's equivalent) for a second. Obviously Metrics is going to be more useful for things like predetermined benchmark runs, but could you combine the two for runtime analysis? If so, what advantages do you see in that approach?
RS: NVPerfHUD is an awesome tool for doing performance debugging work. It has its shortcomings, though: there is no way to archive all that data for trend analysis, you can't aggregate the statistics across a broad user base, and so on. This is where Metrics Element comes in. You can use NVPerfHUD for frame-level debugging work, but you can also log all those statistics into Metrics to take advantage of its automated aggregation, uploading, and large-scale analysis capabilities.
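The aggregation half of that workflow is straightforward to picture: fold archived per-probe samples into summary statistics that can then be compared across runs, builds, or an entire compatibility lab. A hypothetical sketch, not the product's API:

    // Collapse raw samples into mean and 95th percentile per probe.
    #include <algorithm>
    #include <cstddef>
    #include <map>
    #include <string>
    #include <vector>

    struct Summary {
        double mean;
        double p95;
    };

    std::map<std::string, Summary> Aggregate(
        std::map<std::string, std::vector<double>> samplesByProbe) {
        std::map<std::string, Summary> out;
        for (auto& [probe, values] : samplesByProbe) {
            if (values.empty()) continue;
            double sum = 0.0;
            for (double v : values) sum += v;
            std::sort(values.begin(), values.end());
            std::size_t idx =
                static_cast<std::size_t>(values.size() * 0.95);
            if (idx >= values.size()) idx = values.size() - 1;
            out[probe] = { sum / values.size(), values[idx] };
        }
        return out;
    }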
One use we can think of is checking with NVPerfHUD whether the vertex or pixel shader is the bottleneck, then using Metrics and the runtime analysis tools to find which objects are most affected - like that far-away rock with 3K polygons that has no LODs. Is there anything else interesting you can do there, in terms of problems you couldn't easily find (or find at all) otherwise, or simply saving a lot of time while debugging?
RS: Developers are going to find new ways to put Metrics Element to work every day, which is why we built it on an incredibly flexible framework for data collection and analysis. The Dashboard user interface, which is the entry point into Metrics, is a very powerful visualization and reporting tool that allows developers to automate the discovery of interesting data. You can create and save reports that show you new information every day, such as the top 5 resource hogs from last night's build and automated test run.
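A "top 5 resource hogs" report of the kind he mentions reduces to a simple ranking over the aggregated data; again, a hypothetical illustration rather than actual Dashboard code:

    // Rank probes by total recorded cost and keep the worst offenders.
    #include <algorithm>
    #include <cstddef>
    #include <string>
    #include <utility>
    #include <vector>

    std::vector<std::pair<std::string, double>> TopResourceHogs(
        std::vector<std::pair<std::string, double>> totalCostByProbe,
        std::size_t count = 5) {
        std::sort(totalCostByProbe.begin(), totalCostByProbe.end(),
                  [](const auto& a, const auto& b) {
                      return a.second > b.second;
                  });
        if (totalCostByProbe.size() > count) totalCostByProbe.resize(count);
        return totalCostByProbe;
    }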
Also, it mentions next-gen consoles; does that mean you could run automatic tests on, say, both your target PC platform and Xbox 360, compare the results, see which has to be tweaked down or up for ideal performance, and even do scheduled runs on your target console environment?
RS: Absolutely. We will support console devkits in the 1.0 release, and will be adding support for retail consoles in a future version.