In #289 (comment), @zikolach pointed out that some Scenarios may have parameters with more than 1 million elements, which might exceed the size limits of the Microsoft Excel format or of the Python tools used to read/write it.
Scenario.{read,write}_excel() and the associated format should be:
Tested for their behaviour with such 'large' data, and
Extended to support round-trip write and read of such data.
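For reference, the `.xlsx` format caps each worksheet at 2**20 = 1,048,576 rows, so a parameter with more than ~1 million elements cannot fit on a single sheet. A minimal sketch of the arithmetic for deciding how many sheets a "large" item would need (the row limit is the documented format limit; the helper name is hypothetical):

```python
import math

# Documented .xlsx worksheet limit: 2**20 = 1,048,576 rows.
EXCEL_MAX_ROWS = 2**20


def sheets_needed(n_elements: int, header_rows: int = 1) -> int:
    """Number of worksheets required to hold n_elements data rows.

    Each sheet loses `header_rows` rows to the column header.
    """
    per_sheet = EXCEL_MAX_ROWS - header_rows
    return max(1, math.ceil(n_elements / per_sheet))


print(sheets_needed(1_000_000))  # fits on one sheet
print(sheets_needed(2_500_000))  # needs three sheets
```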
In general it would be good to allow splitting item data across multiple sheets, since we can reach the row limit. We could avoid having a separate sheet with the item name-type mapping by embedding the type into the sheet name, e.g. type|name|n or a similar convention, where type is the item type, name is the item name, and n is the number of the sheet for "large" items.
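The naming convention above could be sketched as follows. This is only an illustration of the proposal, not existing ixmp code: the function names are hypothetical, the `|` separator and field order follow the comment, and a real implementation would also have to respect Excel's 31-character limit on sheet names.

```python
from typing import Iterator, List, Tuple

SEP = "|"  # separator proposed in the comment above


def encode_sheet_name(item_type: str, name: str, n: int) -> str:
    """Build a sheet name like 'par|demand|0'.

    Excel caps sheet names at 31 characters, so long item names
    would need truncation or a different scheme.
    """
    sheet = SEP.join([item_type, name, str(n)])
    if len(sheet) > 31:
        raise ValueError(f"sheet name too long for Excel: {sheet!r}")
    return sheet


def decode_sheet_name(sheet: str) -> Tuple[str, str, int]:
    """Invert encode_sheet_name(); item names must not contain SEP."""
    item_type, name, n = sheet.split(SEP)
    return item_type, name, int(n)


def split_rows(rows: List, max_rows: int) -> Iterator[List]:
    """Yield successive chunks of at most max_rows rows each."""
    for start in range(0, len(rows), max_rows):
        yield rows[start : start + max_rows]
```

On write, each chunk from split_rows() would go to its own sheet named by encode_sheet_name(); on read, decode_sheet_name() recovers the type and name, and chunks with the same type and name are concatenated in order of n.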
That sounds like a good proposal.
Users may have old Excel spreadsheets sitting around and will expect them to continue working. So we would need to add a second code path for these alternate formats; support both for some time; and then deprecate & eventually remove the current one. We would need an explicit decision about when the current format will no longer be supported.
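Supporting both layouts during the transition could start from a format-detection step like the sketch below. This is an assumption about how detection might work, not current behaviour; the mapping-sheet name here is a placeholder for whatever sheet the current format uses for the item name-type mapping, and the "new" test keys off the type|name|n convention proposed above.

```python
from typing import Iterable

# Hypothetical name for the current format's name-type mapping sheet.
MAPPING_SHEET = "type_mapping"


def detect_format(sheet_names: Iterable[str]) -> str:
    """Guess which layout a workbook uses.

    'old' -> a dedicated name-type mapping sheet is present
    'new' -> sheet names follow the 'type|name|n' convention
    """
    names = list(sheet_names)
    if MAPPING_SHEET in names:
        return "old"
    if any(s.count("|") == 2 for s in names):
        return "new"
    raise ValueError("unrecognized workbook layout")
```

read_excel() could dispatch on this result to the old or new code path, and write_excel() could emit a DeprecationWarning when asked to produce the old layout.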