perf: cache parsed CSV file data and replace Array with Set to improve code gen performance #168

Merged · 3 commits · Nov 22, 2021
25 changes: 16 additions & 9 deletions src/Helpers.js
@@ -22,6 +22,8 @@ let nextFixedDelayVarSeq = 1
let nextLevelVarSeq = 1
// next sequence number for generated aux variable names
let nextAuxVarSeq = 1
+ // parsed csv data cache
+ let csvData = new Map()
// string table for web apps
export let strings = []

@@ -321,16 +323,21 @@ export let readCsv = (pathname, delimiter = ',') => {
// Read the CSV file at the pathname and parse it with the given delimiter.
// Return an array of rows that are each an array of columns.
// If there is a header row, it is returned as the first row.
- let result = null
- const CSV_PARSE_OPTS = {
- delimiter,
- columns: false,
- trim: true,
- skip_empty_lines: true,
- skip_lines_with_empty_values: true
+ // Cache parsed files to support multiple reads from different equations.
+ let csv = csvData.get(pathname)
+ if (csv == null) {
+ const CSV_PARSE_OPTS = {
+ delimiter,
+ columns: false,
+ trim: true,
+ skip_empty_lines: true,
+ skip_lines_with_empty_values: true
+ }
+ let data = B.read(pathname)
+ csv = parseCsv(data, CSV_PARSE_OPTS)
+ csvData.set(pathname, csv)
Contributor comment:
I wonder if we need to worry about caching many large files in memory being an issue for models that read many large CSV files (but that only make use of parts of them). But I suspect the answer is "probably not", and if we do ever encounter such a beast, we can worry about it at that time.

}
- let data = B.read(pathname)
- return parseCsv(data, CSV_PARSE_OPTS)
+ return csv
}
// Convert the var name and subscript names to canonical form separately.
export let canonicalVensimName = vname => {
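For reference, the caching pattern introduced in readCsv above can be sketched as a standalone module. This is a minimal illustration rather than the project's actual code: it assumes Node's fs module and the synchronous parse function from csv-parse/sync as stand-ins for the project's B.read and parseCsv helpers, and the file path in the usage example is hypothetical.

import { readFileSync } from 'fs'
import { parse } from 'csv-parse/sync'

// Module-level cache mapping a pathname to its parsed rows.
const csvData = new Map()

export function readCsv(pathname, delimiter = ',') {
  // Reuse the parsed rows if this file has been read before.
  let csv = csvData.get(pathname)
  if (csv == null) {
    csv = parse(readFileSync(pathname, 'utf8'), {
      delimiter,
      columns: false,
      trim: true,
      skip_empty_lines: true
    })
    csvData.set(pathname, csv)
  }
  return csv
}

// Two equations reading the same file now share a single parse:
const a = readCsv('data/lookup.csv')
const b = readCsv('data/lookup.csv')
console.log(a === b) // true: the second call is a Map lookup, not a reparse

Note that the cache key is the pathname alone, so a second read of the same file with a different delimiter would return the rows from the first parse; presumably a given file is always read with a single delimiter. The memory tradeoff raised in the review comment above also follows from this design: entries live for the lifetime of the process, and a bounded or clearable Map would be the natural fix if that ever became a problem.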
6 changes: 3 additions & 3 deletions src/Model.js
@@ -315,7 +315,7 @@ function removeUnusedVariables(spec) {

// Walk the reference tree rooted at the given var and record it (and anything
// that it references) as being "used".
- const referencedRefIds = []
+ const referencedRefIds = new Set()
const recordRefsOfVariable = v => {
// If this variable is subscripted, we need to record all subscript variants;
// `refIdsWithName` will return those. We also need to record all variables
@@ -326,8 +326,8 @@
refIds = refIds.concat(v.references)
refIds = refIds.concat(v.initReferences)
for (const refId of refIds) {
- if (!referencedRefIds.includes(refId)) {
- referencedRefIds.push(refId)
+ if (!referencedRefIds.has(refId)) {
+ referencedRefIds.add(refId)
const refVar = varWithRefId(refId)
recordUsedVariable(refVar)
recordRefsOfVariable(refVar)
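The Model.js change is the standard membership-test optimization: Array.prototype.includes scans the whole array on every call (linear time per lookup, quadratic over the full reference walk), while Set.prototype.has is a constant-time hash lookup. A small self-contained sketch of the pattern, using hypothetical id strings, shows the difference:

// Deduplicate n ids, half of them repeats, with an Array vs. a Set.
const n = 20000
const ids = Array.from({ length: n }, (_, i) => `_var_${i % (n / 2)}`)

console.time('array')
const seenArray = []
for (const id of ids) {
  if (!seenArray.includes(id)) seenArray.push(id) // linear scan per lookup
}
console.timeEnd('array')

console.time('set')
const seenSet = new Set()
for (const id of ids) {
  if (!seenSet.has(id)) seenSet.add(id) // constant-time hash lookup
}
console.timeEnd('set')

Because recordRefsOfVariable runs this check once per reference in the tree, the walk drops from roughly quadratic to roughly linear in the number of references, while the traversal logic itself is unchanged.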