Calc/Performance/misc

Miscellaneous performance optimization opportunities that don't yet have an entry of their own under Calc/To-Dos/Performance/...

== In-sheet objects ==

With even a relatively modest number of in-sheet objects (a favorite tool of complex spreadsheet creators) things become horribly slow: a sample document with almost no data or macros and only 240 list boxes takes around 30 seconds to load.

The sheet objects need to be created lazily, on idle, in the svx layer; there is also a floating patch to improve VCL's control management performance, which is where some of the problems lie.
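
Purely as an illustration of the lazy-creation idea (the class and method names below are hypothetical, not the real svx/VCL interfaces): instead of instantiating every control during import, the import code records a cheap description of each object and an idle handler materializes a small batch at a time.

<source lang="cpp">
// Sketch only: defer creation of heavy in-sheet controls until idle time so
// that loading a document with hundreds of list boxes stays responsive.
// All names here are hypothetical; the real work lives in svx/VCL.
#include <cstddef>
#include <deque>
#include <functional>
#include <memory>

// Stand-in for an expensive svx/VCL control; construction is the costly part.
struct HeavyControl
{
    explicit HeavyControl(int nId) : mnId(nId) {}
    int mnId;
};

// Queue of deferred creations, drained from an idle handler.
class LazyControlQueue
{
    std::deque<std::function<std::unique_ptr<HeavyControl>()>> maPending;

public:
    // During import: only remember how to build the control later.
    void schedule(int nId)
    {
        maPending.push_back([nId] { return std::make_unique<HeavyControl>(nId); });
    }

    // From an idle callback: materialize a small batch per time slice.
    void createSomeOnIdle(std::size_t nBatch = 8)
    {
        while (nBatch-- > 0 && !maPending.empty())
        {
            std::unique_ptr<HeavyControl> pControl = maPending.front()();
            maPending.pop_front();
            (void)pControl; // hand the control over to the sheet layer here
        }
    }

    bool empty() const { return maPending.empty(); }
};
</source>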

== Large / complex pivot sheets ==

The existing Data Pilot implementation doesn't have a shared, normalized form of the data (i.e. with each field value reduced to an ordinal, for O(1) lookup). We should implement just such a Data Pilot cache, using a representation that is compatible with the PivotTable cache and can be populated from it on import.
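
A minimal sketch of the normalization idea, assuming nothing about the real Data Pilot / PivotTable cache layout (FieldCache and its methods are illustrative names only): each distinct field value is stored once, and the column keeps only small ordinal indices, so comparison and lookup during grouping are O(1) integer operations.

<source lang="cpp">
// Sketch of a normalized field cache: each distinct value of a source field
// is stored once, and the column itself keeps only ordinal indices into that
// pool, so grouping works on small integers rather than repeated strings.
// Illustrative only; the real Data Pilot / PivotTable caches are richer.
#include <cstddef>
#include <string>
#include <unordered_map>
#include <vector>

class FieldCache
{
    std::vector<std::string> maItems;                       // ordinal -> distinct value
    std::unordered_map<std::string, std::size_t> maLookup;  // value -> ordinal
    std::vector<std::size_t> maColumn;                      // one ordinal per source row

public:
    void addRowValue(const std::string& rValue)
    {
        auto aInserted = maLookup.emplace(rValue, maItems.size());
        if (aInserted.second)
            maItems.push_back(rValue);
        maColumn.push_back(aInserted.first->second);
    }

    std::size_t ordinalOfRow(std::size_t nRow) const { return maColumn[nRow]; } // O(1)
    const std::string& itemOf(std::size_t nOrdinal) const { return maItems[nOrdinal]; }
    std::size_t distinctCount() const { return maItems.size(); }
};
</source>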

== Threaded calculation ==

Ideally, to scale to hyper-threaded machines, we need to crunch a workbook's dependency graph and then thread the calculation.
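
One possible shape for this, sketched under the assumption of a cycle-free dependency graph and ignoring interpreter state, volatile functions and matrix formulas (all names below are illustrative): group cells into "waves" whose precedents lie entirely in earlier waves, then evaluate each wave in parallel.

<source lang="cpp">
// Sketch of level-by-level parallel recalculation: cells are grouped into
// "waves" whose precedents all lie in earlier waves, and each wave is then
// evaluated concurrently.  Purely illustrative: it assumes a cycle-free
// dependency graph, spawns one thread per ready cell (a real engine would
// use a pool), and replaces the formula interpreter with a trivial sum.
#include <algorithm>
#include <cstddef>
#include <thread>
#include <vector>

struct Cell
{
    std::vector<std::size_t> aPrecedents; // indices of cells this one depends on
    double fValue = 0.0;
};

// Assign each cell a topological level (0 = no precedents).
static std::vector<std::size_t> computeLevels(const std::vector<Cell>& rCells)
{
    std::vector<std::size_t> aLevel(rCells.size(), 0);
    bool bChanged = true;
    while (bChanged) // terminates for a DAG; cycles need separate handling
    {
        bChanged = false;
        for (std::size_t i = 0; i < rCells.size(); ++i)
            for (std::size_t nDep : rCells[i].aPrecedents)
                if (aLevel[i] <= aLevel[nDep])
                {
                    aLevel[i] = aLevel[nDep] + 1;
                    bChanged = true;
                }
    }
    return aLevel;
}

void recalcThreaded(std::vector<Cell>& rCells)
{
    const std::vector<std::size_t> aLevel = computeLevels(rCells);
    std::size_t nMaxLevel = 0;
    for (std::size_t nLvl : aLevel)
        nMaxLevel = std::max(nMaxLevel, nLvl);

    for (std::size_t nWave = 0; nWave <= nMaxLevel; ++nWave)
    {
        std::vector<std::thread> aWorkers;
        for (std::size_t i = 0; i < rCells.size(); ++i)
            if (aLevel[i] == nWave)
                aWorkers.emplace_back([&rCells, i] {
                    double fSum = 0.0; // stand-in for the real interpreter
                    for (std::size_t nDep : rCells[i].aPrecedents)
                        fSum += rCells[nDep].fValue;
                    rCells[i].fValue = fSum + 1.0;
                });
        for (std::thread& rWorker : aWorkers)
            rWorker.join();
    }
}
</source>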

Similarly, the process of constructing a Data Pilot cache and (subsequently) collating that data is susceptible to threading.
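
Again purely as a sketch (the row representation and the merge step below are simplifying assumptions, not the real cache code): split the source rows into ranges, let each worker build a partial result, and merge the partial results once all workers have joined.

<source lang="cpp">
// Sketch of threading the cache-building / collation step: the source rows
// are split into contiguous ranges, each worker builds a partial tally, and
// the partial results are merged once the workers have joined.  Illustrative
// only; a real implementation would build per-field caches, not a histogram.
#include <cstddef>
#include <map>
#include <string>
#include <thread>
#include <vector>

using Tally = std::map<std::string, std::size_t>;

Tally collateThreaded(const std::vector<std::string>& rRows, unsigned nThreads = 4)
{
    std::vector<Tally> aPartial(nThreads);
    std::vector<std::thread> aWorkers;

    for (unsigned t = 0; t < nThreads; ++t)
        aWorkers.emplace_back([&, t] {
            // Each worker touches only its own slice and its own partial tally.
            const std::size_t nBegin = rRows.size() * t / nThreads;
            const std::size_t nEnd = rRows.size() * (t + 1) / nThreads;
            for (std::size_t i = nBegin; i < nEnd; ++i)
                ++aPartial[t][rRows[i]];
        });
    for (std::thread& rWorker : aWorkers)
        rWorker.join();

    Tally aResult; // serial merge of the per-thread results
    for (const Tally& rPart : aPartial)
        for (const auto& rEntry : rPart)
            aResult[rEntry.first] += rEntry.second;
    return aResult;
}
</source>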
