parse_dom_table function performance #1626
https://github.com/SheetJS/js-xlsx/blob/master/xlsx.js#L19061
The `merges` for loop becomes extremely large and slow when handling a big table. I tested this with a 204-column × 250-row table: without optimization, `ws['!merges']` ends up as a huge array, and almost every item is a single cell "merged" with itself, which is useless. Before the optimization, the export function in my test case took 18.6s to execute; after, only 4.67s.
One of my customers was exporting a 193-column × 1277-row table: the export function took 6 minutes to execute before the optimization, and only 15s after.
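The pruning described above can be sketched as follows. This is a hypothetical helper, not the actual parse_dom_table code; it assumes the standard SheetJS `{s: {r, c}, e: {r, c}}` range shape for entries of `ws['!merges']`, and simply drops any "merge" whose start and end are the same cell:

```javascript
// Drop useless 1x1 "merges" before attaching the array to the worksheet.
// A range whose start and end coincide covers a single cell, has no visual
// effect, and only bloats ws['!merges'] (hypothetical helper name).
function pruneSingleCellMerges(merges) {
  return merges.filter(function (m) {
    return m.s.r !== m.e.r || m.s.c !== m.e.c;
  });
}
```

With one real merge and one degenerate 1×1 entry, only the real merge survives, which is what shrinks the array that the slow loop has to walk.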
code change
febac23e8e
Well, my code change above was wrong. I then worked out the logic of the
merges
for loop and made another change, reducing the time complexity from O(merges.length * n) to O(merges.length). When exporting a 200-column × 10000-row table, the parse_dom_table function no longer runs out of memory, but JSZip's utf8ToBytes function does.
@ThomasChan thanks for looking into this, and feel free to submit a PR.
There's definitely room for improvement. The weird loop is done that way to address a case like:
The first cell in the second row should be located at F2, but to determine that you need to look at the A1:C2 merge first then the D1:E2 merge.
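The placement logic described above can be illustrated with a small sketch (an illustration of the idea, not the actual xlsx.js loop). With merges A1:C2 (rows 0–1, columns 0–2) and D1:E2 (rows 0–1, columns 3–4), finding where the first real cell of row 2 (0-indexed row 1) lands requires jumping past A1:C2 first, then re-scanning and jumping past D1:E2:

```javascript
// Find the first column in `row` not covered by any merge range.
// Ranges use the SheetJS {s: {r, c}, e: {r, c}} shape; the re-scan is
// needed because jumping past one merge can land inside another.
function firstFreeColumn(merges, row) {
  var col = 0, moved = true;
  while (moved) {
    moved = false;
    for (var i = 0; i < merges.length; ++i) {
      var m = merges[i];
      if (row >= m.s.r && row <= m.e.r && col >= m.s.c && col <= m.e.c) {
        col = m.e.c + 1; // jump past this merge and re-scan
        moved = true;
      }
    }
  }
  return col;
}

var merges = [
  { s: { r: 0, c: 0 }, e: { r: 1, c: 2 } }, // A1:C2
  { s: { r: 0, c: 3 }, e: { r: 1, c: 4 } }  // D1:E2
];
firstFreeColumn(merges, 1); // → 5, i.e. column F, so the cell goes in F2
```

Each call re-walks the whole merge array, which is exactly why the cost blows up when the array contains thousands of entries.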
The implementation was designed expecting only a small number of merges. If you have many, then the approach is extremely slow.
Given the parse order, it will always be sorted by starting row then by starting column. To reduce it to a single walk through the merge array, you might be able to sort by ending row then by starting column (sorting the array with a custom sort function). Then you'd keep track of a starting index into the array (elements before that point could never affect the result, so you can skip them).
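A rough sketch of that single-walk idea, under the assumptions stated above (names are hypothetical, and it assumes rows are visited in nondecreasing order, as they are during a top-to-bottom table parse):

```javascript
// Sort merges by ending row, then starting column, and keep a pointer past
// which ranges have already expired (their ending row is above the current
// row), so they are never re-examined (hypothetical helper, not xlsx.js code).
function makeMergeSkipper(merges) {
  var sorted = merges.slice().sort(function (a, b) {
    return (a.e.r - b.e.r) || (a.s.c - b.s.c);
  });
  var start = 0; // merges before this index can no longer affect any row
  return function nextFreeColumn(row, col) {
    while (start < sorted.length && sorted[start].e.r < row) ++start;
    // Within a row, ranges are seen left to right, so one pass suffices.
    for (var i = start; i < sorted.length; ++i) {
      var m = sorted[i];
      if (row >= m.s.r && row <= m.e.r && col >= m.s.c && col <= m.e.c) {
        col = m.e.c + 1; // skip past this merge
      }
    }
    return col;
  };
}
```

Because the `start` pointer only moves forward, the total work across all rows is bounded by one pass over the sorted merge array plus the per-row scan of still-live ranges, instead of re-walking everything for every cell.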
So that we have a sense for the performance, can you share a sample table that you are trying to convert?
Thanks for the reply. I have submitted a PR. The sample table is nothing special, just a plain HTML table; you can create a big one as described above (200 columns × 10000 rows) and export it with xlsx.