Performance Benchmark Tool Set

From Apache OpenOffice Wiki (latest revision as of 09:55, 25 February 2009)
== Purpose ==
Use benchmark tools to test and trace the performance of all OOo versions according to the predefined OOo product performance diagram. With the results we can compare a main version against its CWS version, trace the performance of different versions, and monitor product variation continuously.

* Measure OOo performance and show the interaction of all hotspots.
* Trace changes in OOo performance.
* Collect and describe performance test cases and problems in OOo.

We define an XML-based Performance Roadmap; all tools depend on this Performance Roadmap.
== PERFORMANCE_LOG vs RTL_LOG ==
We created a new PERFORMANCE_LOG macro (PERFORMANCE_DIAGRAM & PERFORMANCE_BENCHMARK) to record the clock count or time while the program runs. It uses numbers rather than the strings used by RTL_LOG, so it is more efficient and much faster than RTL_LOG.

Another reason we do not use RTL_LOG is that the inserted PERFORMANCE_LOG points must not change after they are defined and inserted: we want the test points to be identical across versions, so that comparing the test results is reasonable and valuable. PERFORMANCE_LOG is used only for performance benchmark tests, and we do not suggest using it for other purposes.
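This page does not show PERFORMANCE_LOG's definition. As an illustration only, a numeric trace macro in this spirit might look like the sketch below; the names (`PerfEntry`, `perf_buffer`) and the use of `std::clock()` as the tick source are assumptions, not the actual OOo implementation:

```cpp
#include <cstdint>
#include <ctime>
#include <vector>

// One fixed-size numeric record per trace point -- no string
// formatting at runtime, unlike RTL_LOG.
struct PerfEntry {
    std::uint32_t caseId;  // which test case (maps to a CASE_* id)
    std::uint32_t nodeNo;  // which node inside the case
    std::uint64_t tick;    // clock count when the point was hit
};

// In-memory buffer; a real implementation would flush it to the
// binary log file when the test run ends.
inline std::vector<PerfEntry>& perf_buffer() {
    static std::vector<PerfEntry> buf;
    return buf;
}

// Hypothetical stand-in for PERFORMANCE_LOG: store two ids and a tick.
#define PERFORMANCE_LOG(caseId, nodeNo) \
    perf_buffer().push_back(PerfEntry{(caseId), (nodeNo), \
        static_cast<std::uint64_t>(std::clock())})
```

Because each trace point stores only three integers, the cost per hit is a single buffer append, which is why a numeric scheme can be much cheaper than string logging.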
  
== Benchmark Tool Set ==
A sample of the Performance Roadmap:
 
 <PerformanceRoadmap>
   <Module name="writer">
     <Desc>This measures everything about OpenOffice.org Writer's performance. All Writer document types must be tested.</Desc>
     <View name="load">
       <Desc>Load documents; loading from startup, the Start Module, or a script macro is not considered.</Desc>
       <Action name="loadfilter">
         <Desc>Find the suitable filter to handle the document.</Desc>
       </Action>
       <Action name="xmlload">
         <Desc>Parse the XML document and form a document object.</Desc>
         <Action name="meta">
           <Desc>Parse meta.xml.</Desc>
         </Action>
         <Action name="settings">
           <Desc>Parse settings.xml.</Desc>
         </Action>
         <Action name="styles">
           <Desc>Parse styles.xml.</Desc>
         </Action>
         <Action name="content">
           <Desc>Parse content.xml.</Desc>
         </Action>
       </Action>
       <Action name="frame">
         <Action name="createframe">
           <Desc>Create a new frame.</Desc>
         </Action>
         <Action name="findframe">
           <Desc>Find an existing frame.</Desc>
         </Action>
       </Action>
       <Action name="font">
         <Desc>Get all system-supported fonts.</Desc>
       </Action>
       <Action name="menu">
         <Desc>Create default and user-defined menus.</Desc>
       </Action>
       <Action name="toolbar">
         <Desc>Create default and user-defined toolbars.</Desc>
       </Action>
       <Action name="layout">
         <Desc>Layout all visual elements.</Desc>
         <Action name="layoutmenutoolbar">
           <Desc>Layout menus and toolbars.</Desc>
         </Action>
         <Action name="layoutdoc">
           <Desc>Layout document contents.</Desc>
         </Action>
       </Action>
     </View>
     <View name="save">
     </View>
   </Module>
 </PerformanceRoadmap>
Several results will be generated by the Performance Tools.

== Log file ==
To reduce the time taken by the benchmark code, we record the time and the case identification as numbers, which means the log file is in a binary format.

There are two sections in the log file: "Test Information" and "Benchmark Data".

"Test Information" is the record of the current test, including the OOo version, the OOo roadmap version, the date the test was run, and hardware information. "Test Information" is generated by the Benchmarker (an external tool).

"Benchmark Data" is recorded by native code in soffice. Two kinds of data are put into the log: the simple time record of a trace point, and memory or object data.

A simple time trace record structure looks like:

 struct BenchmarkRecord
 {
     unsigned int     mCaseId;
     unsigned int     mNodeNo;
     unsigned __int64 mTick;
 };

Memory and object data tracing are supported by a series of macros such as:

 TRACE_PUSH_MEMORY(*pAddr, size)
 TRACE_PUSH_OBJECT(*pObj, ObjType)
 ...

These functions push a block of memory directly into the log.
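The exact on-disk layout is not specified on this page. As a sketch only, one record could be appended to and read back from a binary log as follows; `std::uint64_t` stands in for the MSVC-specific `unsigned __int64`, and the helper names are assumptions:

```cpp
#include <cstdint>
#include <cstdio>

// Mirror of the BenchmarkRecord structure above, with portable types.
struct BenchmarkRecord {
    std::uint32_t mCaseId;
    std::uint32_t mNodeNo;
    std::uint64_t mTick;
};

// Append one record to the binary benchmark log.
bool writeRecord(std::FILE* log, const BenchmarkRecord& r) {
    return std::fwrite(&r, sizeof r, 1, log) == 1;
}

// Read the next record; returns false at end of file.
bool readRecord(std::FILE* log, BenchmarkRecord& r) {
    return std::fread(&r, sizeof r, 1, log) == 1;
}
```

Writing the struct bytes directly keeps the logging path cheap, at the cost of tying the log format to one platform's padding and endianness.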
== Mapping Code ==
To map the node definitions of the performance roadmap to native code, as in the following sample, we need a tool named "CodeMapping" that transforms the roadmap into number definitions.

A result from CodeMapping (benchmarkmap.h):

 #ifndef _PERFORMANCE_ROADMAP_
 #define _PERFORMANCE_ROADMAP_
 
 #define CASE_LOAD                   1
 #define CASE_LOAD_START             0
 #define CASE_LOAD_LOADFILTER        1       // Find the suitable filter to handle the document.
 #define CASE_LOAD_XMLLOAD           2       // XML document parsing and forming a doc object.
 #define CASE_LOAD_META              4
 #define CASE_LOAD_SETTINGS          5
 #define CASE_LOAD_STYLES            6
 #define CASE_LOAD_CONTENT           7
 #define CASE_LOAD_FRAME             8
 #define CASE_LOAD_CREATEFRAME       9
 #define CASE_LOAD_FINDFRAME         10
 #define CASE_LOAD_FONT              11
 #define CASE_LOAD_MENU              12
 #define CASE_LOAD_TOOLBAR           13
 #define CASE_LOAD_LAYOUT            14
 #define CASE_LOAD_LAYOUTMENUTOOLBAR 15
 #define CASE_LOAD_LAYOUTDOC         16
 #define CASE_LOAD_END               0xFFFF
 
 #define CASE_SAVE                   1
 #define CASE_SAVE_START             0
 //...
 #define CASE_SAVE_END               0xFFFF
 
 #endif //_PERFORMANCE_ROADMAP_
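CodeMapping itself is not listed here. The heart of the transformation — walking the roadmap's action names in document order and handing out sequential numbers — can be sketched as below. The function name and the flat input list are assumptions; the real tool parses the XML roadmap, and its exact numbering may differ (note the generated header above skips 3):

```cpp
#include <cctype>
#include <map>
#include <string>
#include <vector>

// Assign sequential ids to roadmap action names, mimicking the
// CASE_LOAD_* numbering in the generated benchmarkmap.h.
std::map<std::string, unsigned> mapActions(const std::string& view,
                                           const std::vector<std::string>& actions) {
    std::map<std::string, unsigned> ids;
    std::string prefix = "CASE_";
    for (char c : view)
        prefix += static_cast<char>(std::toupper(static_cast<unsigned char>(c)));
    ids[prefix + "_START"] = 0;      // every case starts at node 0
    unsigned next = 1;
    for (const std::string& name : actions) {
        std::string macro = prefix + "_";
        for (char c : name)
            macro += static_cast<char>(std::toupper(static_cast<unsigned char>(c)));
        ids[macro] = next++;         // sequential node numbers
    }
    ids[prefix + "_END"] = 0xFFFF;   // sentinel end marker
    return ids;
}
```

Emitting these pairs as `#define` lines yields a header in the shape of benchmarkmap.h.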
== Reports ==
To get a readable report, a tool named Tracer loads a set of Benchmarker log files, fetches the records of the cases selected through a GUI application or a web page, and exports reports in multiple formats.

[[Image:Perf_bm_toolset.jpg]]

'''CodeMapping''': Transforms the performance roadmap into native-language (C++/C) header files containing the numerical case-id and node-id definitions for the performance test cases in the roadmap.

'''Benchmarker''': Runs the standard cases defined in the performance roadmap, or user-defined cases, and generates the benchmark log (e.g. Benchmarker.log in the chart above).

'''Tracer''': Views the benchmark log and the interaction of OOo's hotspots in the roadmap, and exports reports in multiple formats (such as .ods, .html, ...).
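The report computation is not spelled out on this page. The essential step — pairing each case's start and end records and reporting the elapsed ticks — might look like the sketch below; the record layout follows the Log file section and the START/END node values 0 and 0xFFFF follow benchmarkmap.h, while everything else is an assumption:

```cpp
#include <cstdint>
#include <map>
#include <vector>

struct BenchmarkRecord {
    std::uint32_t mCaseId;
    std::uint32_t mNodeNo;   // 0 = CASE_*_START, 0xFFFF = CASE_*_END
    std::uint64_t mTick;
};

// Elapsed ticks per case id, computed from start/end trace records.
std::map<std::uint32_t, std::uint64_t>
elapsedPerCase(const std::vector<BenchmarkRecord>& log) {
    std::map<std::uint32_t, std::uint64_t> start, elapsed;
    for (const BenchmarkRecord& r : log) {
        if (r.mNodeNo == 0)
            start[r.mCaseId] = r.mTick;                       // case began
        else if (r.mNodeNo == 0xFFFF)
            elapsed[r.mCaseId] = r.mTick - start[r.mCaseId];  // case ended
    }
    return elapsed;
}
```

A per-case table like this is the kind of data Tracer would then format into .ods or .html output.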
  
 
== Web Supporting ==
Show easily readable results on the web. The benchmark tool can be used to fully test and verify our optimization work on OOo.

[[Image:Perf_bm_tool_web.jpg]]
== How to Use Benchmark Tool ==
After we define the performance diagram, we create a number for every LOG tag to be inserted, insert the tags, and build the source; finally we run the main test program and view the results on the web. PERFORMANCE_LOG is used in much the same way as RTL_LOG.
== When You Want to Use It ==
Besides the inserted PERFORMANCE_LOG, the test program runs on our server. We plan to publish the server IP and support user logins. The server gets a copy of the MWS source and builds it after PERFORMANCE_LOG has been inserted. Automatic testing is not supported yet, so if you want to test a CWS after making changes to it, please contact us by email.
[[Category:Performance]]
