Large Excel Files (100 Mb) - Extracting partial information #431
Reference: sheetjs/sheetjs#431
Hi,
Can I read large Excel files (around 100 MB)? I tried with the sample, but the browser crashes. I understand that I can read the file in slices; however, I am not sure how to read data (e.g. cell values) from each slice. Is there a good example of reading large Excel files on the client?
I have the same issue. Hope someone can help us.
I have the same issue. Hope someone can help us.
@amerj19 @fabriziomorello @diegoles I recommend trying the Node-based tool first, to confirm that the files can be read at all and that the problem really is the file size. Alternatively, you can email us or post a link to a file that crashes the browser.
As for the general question: because of the way Excel data is stored, a reader has to process quite a bit of the file. The container formats themselves (ZIP and CFB) are flexible enough that a reader must load all of the data into memory (some write tools scatter metadata throughout the file). In the XLSX case, you additionally have to read multiple sub-files to find the cell values. So there is no obvious way to reduce memory consumption without radically changing the API surface.
Combining all of these discussions about reading large files into one issue: #61