Commit

Merge remote-tracking branch 'roblox/master' into upgrade
vocksel committed Oct 12, 2024
2 parents 6a38f84 + 0d5efe3 commit cd1d2df
Showing 52 changed files with 823 additions and 351 deletions.
7 changes: 7 additions & 0 deletions CHANGELOG.md
@@ -4,6 +4,13 @@

* fix warning when multiple configurations are found ([#8](https://github.com/jsdotlua/jest-lua/pull/8))
* fix error message when no tests are found ([#7](https://github.com/jsdotlua/jest-lua/pull/7))
## 3.10.0 (2024-10-02)
* :sparkles: Added a fallback to use `loadstring` instead of `loadmodule` in lower privileged contexts ([#392](https://github.com/Roblox/jest-roblox-internal/pull/392))
* :sparkles: Added `redactStackTrace` option to improve stability to snapshots that contain stacktraces ([#401](https://github.com/Roblox/jest-roblox-internal/pull/401))
* :hammer_and_wrench: Add more helpful error message when requiring `JestGlobals` outside test environment ([#405](https://github.com/Roblox/jest-roblox-internal/pull/405))
* :hammer_and_wrench: Error when trying to load a nonexistent `PrettyFormat` plugin ([#407](https://github.com/Roblox/jest-roblox-internal/pull/407))
* :hammer_and_wrench: Stabilize `RobloxInstance` serialization tests ([#408](https://github.com/Roblox/jest-roblox-internal/pull/408))

## 3.9.1 (2024-08-02)
* :bug: Fix a type analysis error in `JestRuntime` ([#403](https://github.com/Roblox/jest-roblox-internal/pull/403))
7 changes: 5 additions & 2 deletions bin/ci.sh
@@ -14,10 +14,13 @@ robloxdev-cli run --load.model jest.project.json \
EnableSignalBehavior=true DebugForceDeferredSignalBehavior=true MaxDeferReentrancyDepth=15 \
--lua.globals=UPDATESNAPSHOT=false --lua.globals=CI=false --load.asRobloxScript --fs.readwrite="$(pwd)"

echo "Running low privilege tests"
# echo "Running low privilege tests"
robloxdev-cli run --load.model jest.project.json \
--run bin/spec.lua --testService.errorExitCode=1 \
--fastFlags.overrides EnableLoadModule=false --load.asRobloxScript

# Uncomment this to update snapshots
# robloxdev-cli run --load.model jest.project.json --run bin/spec.lua --testService.errorExitCode=1 --fastFlags.allOnLuau --fastFlags.overrides EnableLoadModule=true DebugDisableOptimizedBytecode=true --lua.globals=UPDATESNAPSHOT=true --lua.globals=CI=true --load.asRobloxScript --fs.readwrite="$(pwd)"
# robloxdev-cli run --load.model jest.project.json \
# --run bin/spec.lua --testService.errorExitCode=1 \
# --fastFlags.allOnLuau --fastFlags.overrides EnableLoadModule=true DebugDisableOptimizedBytecode=true \
# --lua.globals=UPDATESNAPSHOT=true --lua.globals=CI=true --load.asRobloxScript --fs.readwrite="$(pwd)"
43 changes: 43 additions & 0 deletions docs/docs/CLI.md
@@ -85,6 +85,49 @@ Lists all test files that Jest Lua will run given the arguments, and exits.

Disables stack trace in test results output.

### `oldFunctionSpying` \[boolean]
![Roblox only](/img/roblox-only.svg)

Changes how [`jest.spyOn()`](jest-object#jestspyonobject-methodname) overwrites
methods in the spied object, making it behave like older versions of Jest.

* When `oldFunctionSpying = true`, it will overwrite the spied method with a
*mock object*. (old behaviour)
* When `oldFunctionSpying = false`, it will overwrite the spied method with a
*regular Lua function*. (new behaviour)

Regardless of the value of `oldFunctionSpying`, the `spyOn()` function will
always return a mock object.

```lua
-- when `oldFunctionSpying = true` (old behaviour)

local guineaPig = {
foo = function() end
}

local mockObj = jest.spyOn(guineaPig, "foo")
mockObj.mockReturnValue(25)

print(typeof(guineaPig.foo)) --> table
print(typeof(mockObj)) --> table
print(guineaPig.foo == mockObj) --> true
```

```lua
-- when `oldFunctionSpying = false` (new behaviour)

local guineaPig = {
foo = function() end
}

local mockObj = jest.spyOn(guineaPig, "foo")

print(typeof(guineaPig.foo)) --> function
print(typeof(mockObj)) --> table
print(guineaPig.foo == mockObj) --> false
```

### `passWithNoTests` \[boolean]
[![Jest](/img/jestjs.svg)](https://jest-archive-august-2023.netlify.app/docs/27.x/cli#--passwithnotests) ![Aligned](/img/aligned.svg)

30 changes: 30 additions & 0 deletions docs/docs/Configuration.md
@@ -202,6 +202,36 @@ TextLabel {
]=]
```

`pretty-format` further supports redacting stack traces from error logs via the
`RedactStackTraces` plugin. By default, it only attempts to redact the
contents of `Error` objects, but it can be configured to also search through
strings by setting the `redactStackTracesInStrings` boolean (default: `false`).

For example, this lets you save snapshots that contain stack traces, without
those stack traces depending on the actual code structure of the repository.
This reduces the chance of the snapshot test breaking due to unrelated changes
in far-away parts of the code.

```lua title="jest.config.lua"
return {
testMatch = { "**/*.spec" },
snapshotFormat = { redactStackTracesInStrings = true }
}
```
```lua title="test.spec.lua"
test('print stack trace', function()
expect(debug.traceback()).toMatchSnapshot()
end)
```
```lua title="test.spec.snap.lua"
exports[ [=[print stack trace 1]=] ] = [=[
Redacted.Stack.Trace:1337 function epicDuck
Redacted.Stack.Trace:1337 function epicDuck
Redacted.Stack.Trace:1337 function epicDuck
Redacted.Stack.Trace:1337 function epicDuck
]=]
```


### `snapshotSerializers` \[array<serializer>]
[![Jest](/img/jestjs.svg)](https://jest-archive-august-2023.netlify.app/docs/27.x/configuration#snapshotserializers-arraystring) ![API Change](/img/apichange.svg)
4 changes: 2 additions & 2 deletions docs/docs/ExpectAPI.md
@@ -18,7 +18,7 @@ local expect = require("@DevPackages/JestGlobals").expect

To use regular expressions in matchers that support it, you need to add [LuauRegExp](https://github.com/Roblox/luau-regexp) as a dependency in your `rotriever.toml` and require it in your code.
```yaml title="rotriever.toml"
RegExp = "github.com/roblox/luau-regexp@0.2.0"
RegExp = "0.2.2"
```

```lua
Expand All @@ -30,7 +30,7 @@ local RegExp = require("@Packages/RegExp")

To use Promises in your tests, add [roblox-lua-promise](https://github.com/Roblox/roblox-lua-promise) as a dependency in your `rotriever.toml`
```yaml
Promise = "github.com/evaera/roblox-lua-promise@3.3.0"
Promise = "3.3.0"
```

### Error
Expand Down
155 changes: 82 additions & 73 deletions docs/docs/JestBenchmarkAPI.md
@@ -1,65 +1,71 @@
---
id: jest-benchmark
title: JestBenchmark
title: Jest Benchmark
---

![Roblox only](/img/roblox-only.svg)

Benchmarks are useful tools for gating performance in CI, optimizing code, and capturing performance gains. JestBenchmark aims to make it easier to write benchmarks in the Luau language.
Benchmarks are useful tools for gating performance in CI, optimizing code, and capturing performance gains. `JestBenchmark` aims to make it easier to write benchmarks in the Luau language.

JestBenchmark must be imported from the JestBenchmark Package
`JestBenchmark` must be added as a dev dependency to your `rotriever.toml` and imported.
```yaml title="rotriever.toml"
JestBenchmark = "3.9.1"
```

```lua
local JestBenchmark = require("@DevPackages/JestBenchmark")
local JestBenchmark = require(Packages.Dev.JestBenchmark)
local benchmark = JestBenchmark.benchmark
local CustomReporters = JestBenchmark.CustomReporters
```

### benchmark
![Roblox only](/img/roblox-only.svg)
## Methods

import TOCInline from "@theme/TOCInline";

<TOCInline toc={toc.slice(1)}/>

### `benchmark(name, fn, timeout)`

The `benchmark` function is a wrapper around `test` that provides automatic profiling for FPS and benchmark running time. Similar to `test`, it exposes `benchmark.only` and `benchmark.skip` to focus and skip tests, respectively.

```lua
describe("Home Page Benchmarks", function()
benchmark("First Render Performance", function(Profiler, reporters)
render(React.createElement(HomePage))
benchmark("First Render Performance", function(Profiler, reporters)
render(React.createElement(HomePage))

local GameCarousel = screen.getByText("Continue"):expect()
local GameCarousel = screen.getByText("Continue"):expect()

expect(GameCarousel).toBeDefined()
end)
expect(GameCarousel).toBeDefined()
end)
end)
```

### Reporter
![Roblox only](/img/roblox-only.svg)
## Reporter

The `Reporter` object collects and aggregates data generated during a benchmark. For example, you may have an FPS reporter that collects the delta time between each frame in a benchmark and calculates the average FPS over the benchmark.

### initializeReporter
![Roblox only](/img/roblox-only.svg)
### `initializeReporter(metricName, fn)`

`initializeReporter` accepts a metric name and collector function as arguments and returns a Reporter object. The metric name is the label given to the data collected. The collector function accepts a list of values and reduces them to a single value.
`initializeReporter` accepts a metric name and collector function as arguments and returns a `Reporter` object. The metric name is the label given to the data collected. The collector function accepts a list of values and reduces them to a single value.

```lua
local function average(nums: { number }): number
if #nums == 0 then
return 0
end
if #nums == 0 then
return 0
end

local sum = 0
for _, v in nums do
sum += v
end
local sum = 0
for _, v in nums do
sum += v
end

return sum / #nums
return sum / #nums
end

local averageReporter = initializeReporter("average", average)
```

### Reporter.start()
![Roblox only](/img/roblox-only.svg)
### `Reporter.start(sectionName)`

A reporting segment is initialized with `Reporter.start(sectionName: string)`. All values reported within the segment are collected as a group and reduced to a single value in `Reporter.finish`. The segment is labeled with the `sectionName` argument. Reporter segments can be nested or can run sequentially. All Reporter segments must be concluded by calling `Reporter.stop`.

@@ -83,109 +89,112 @@ local sectionNames, sectionValues = averageReporter.finish()
-- sectionValues: {2, 6, 4}
```

### Reporter.stop()
![Roblox only](/img/roblox-only.svg)
### `Reporter.stop()`

When `Reporter.stop` is called, the reporter section at the top of the stack is popped off, and its section of reported values is marked for collection at the end of the benchmark. No collection is done during the benchmark runtime, since this could reduce performance.

### Reporter.report
![Roblox only](/img/roblox-only.svg)
### `Reporter.report(number)`

When `Reporter.report(value: T)` is called, a value is added to the report queue. The values passed to `report` are reduced when `Reporter.finish` is called.
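A minimal sketch of how `report`, `stop`, and `finish` fit together, reusing the hypothetical `averageReporter` defined earlier:

```lua
averageReporter.start("frameTimes")
averageReporter.report(1) -- values are queued; no reduction happens yet
averageReporter.report(3)
averageReporter.stop() -- closes "frameTimes"; its values await collection

-- finish() reduces each section with the collector function (here: average)
local sectionNames, sectionValues = averageReporter.finish()
-- sectionNames: {"frameTimes"}, sectionValues: {2}
```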

### Reporter.finish
![Roblox only](/img/roblox-only.svg)
### `Reporter.finish()`

`Reporter.finish` should be called at the end of the benchmark runtime. It returns a list of section names and a list of section values generated according to the collectorFn. Values are returned in order of completion.
`Reporter.finish` should be called at the end of the benchmark runtime. It returns a list of section names and a list of section values generated according to the `collectorFn`. Values are returned in order of completion.

### Profiler
![Roblox only](/img/roblox-only.svg)
## Profiler

The `Profiler` object controls a set of reporters and reports data generated during a benchmark. The Profiler is initialized with the `initializeProfiler` function. A profiling segment is started by calling `Profiler.start` and stopped by calling `Profiler.stop`. These segments can be called sequentially or can be nested. Results are generated by calling `Profiler.finish`.

### initializeProfiler
![Roblox only](/img/roblox-only.svg)
### `initializeProfiler(reporters, fn, prefix?)`

`initializeProfiler` accepts a list of reporters and an outputFn as arguments and returns a Profiler object.
`initializeProfiler` accepts a list of reporters and an outputFn as arguments and returns a `Profiler` object. An optional `prefix` string can be prepended to all the section names.

```lua
local reporters = {
initializeReporter("average", average),
initializeReporter("sectionTime", sectionTime),
initializeReporter("average", average),
initializeReporter("sectionTime", sectionTime),
}

local outputFn = function(metricName: string, value: any)
print(`{metricName}, {value}`)
print(`{metricName}, {value}`)
end

local profiler = initializeProfiler(reporters, outputFn)
```

### Profiler.start
![Roblox only](/img/roblox-only.svg)
### `Profiler.start(sectionName)`

When `Profiler.start(sectionName: string)` is called, reporter.start is called for each reporter in the reporters list. Each Profiler section must be concluded with a `Profiler.stop()` call.
When `Profiler.start(sectionName: string)` is called, `reporter.start` is called for each reporter in the reporters list. Each Profiler section must be concluded with a `Profiler.stop()` call.

```lua
Profiler.start("section1")

Profiler.stop()
```

### Profiler.stop
![Roblox only](/img/roblox-only.svg)
### `Profiler.stop()`

When `Profiler.stop()` is called, `reporter.stop` is called for each reporter in the reporters list. Calling `Profiler.stop` without first calling `Profiler.start` will result in an error.
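As a sketch, Profiler sections can be nested or sequential, and every `start` must be balanced by a `stop` (using the `profiler` created above):

```lua
profiler.start("outer")
profiler.start("inner") -- nested section
profiler.stop() -- closes "inner"
profiler.stop() -- closes "outer"

-- profiler.stop() -- would error: no section is currently open
```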

### Profiler.finish
![Roblox only](/img/roblox-only.svg)
### `Profiler.finish()`

When `Profiler.finish` is called, `reporter.finish` is called for each reporter in the reporters list. The results of each `finish` call are then printed by the `outputFn` passed to the Profiler.
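For example, a complete run with the `profiler` created above might look like this; the exact metric values depend on the attached reporters:

```lua
profiler.start("render")
-- ... benchmark work ...
profiler.stop()

-- Each reporter's finish() results are forwarded to the outputFn,
-- which here prints lines such as "average, <value>" and "sectionTime, <value>"
profiler.finish()
```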

### CustomReporters
![Roblox only](/img/roblox-only.svg)
## CustomReporters

By default, the `benchmark` function has two reporters attached: FPS and SectionTime. However, you may want to add custom reporters, perhaps to track Rodux action dispatches, time to interactive, or React re-renders. To enable this, the CustomReporters object exports `useCustomReporters`, which allows the user to add additional reporters to the Profiler. These reporters are passed in a key-value table as the second argument in the provided benchmark function. This should be used in combination with `useDefaultReporters`, which removes all custom reporters from the Profiler.
By default, the `benchmark` function has two reporters attached: `FPSReporter` and `SectionTimeReporter`. However, you may want to add custom reporters, perhaps to track Rodux action dispatches, time to interactive, or React re-renders. To enable this, the CustomReporters object exports `useCustomReporters`, which allows the user to add additional reporters to the Profiler. These reporters are passed in a key-value table as the second argument in the provided benchmark function. This should be used in combination with `useDefaultReporters`, which removes all custom reporters from the Profiler.

```lua
local CustomReporters = JestBenchmark.CustomReporters

beforeEach(function()
CustomReporters.useCustomReporters({
sum = initializeReporter("sum", function(nums)
local sum = 0
for _, v in nums do
sum += v
end
return sum
end)
})
CustomReporters.useCustomReporters({
sum = initializeReporter("sum", function(nums)
local sum = 0
for _, v in nums do
sum += v
end
return sum
end)
})
end)

benchmark("Total renders", function(Profiler, reporters)
local renderCount = getRenderCount()
reporters.sum.report(renderCount)
local renderCount = getRenderCount()
reporters.sum.report(renderCount)
end)

afterEach(function()
CustomReporters.useDefaultReporters()
CustomReporters.useDefaultReporters()
end)
```

### MetricLogger
![Roblox only](/img/roblox-only.svg)
## MetricLogger

By default, benchmarks output directly to stdout. This may not be desirable in all cases. For example, you may want to output results to a BindableEvent or a file stream. The MetricLogger object exposes a `useCustomMetricLogger` function, which allows the user to override the default output function. This should be used in combination with `useDefaultMetricLogger`, which resets the output function to the default value.

For example, to encode the benchmark metrics as JSON and write the output to a `.json` file for each test file, you may configure the following custom metric logger in a [`setupFilesAfterEnv`](configuration#setupfilesafterenv-arraymodulescript):
```lua
local MetricLogger = JestBenchmark.MetricLogger

local benchmarks

beforeAll(function()
benchmarks = {}
end)

beforeEach(function()
MetricLogger.useCustomMetricLogger(function(metricName: string, value: any)
print(HttpService:JSONEncode({
metric = metricName,
value = value
}))
end)
MetricLogger.useCustomMetricLogger(function(metricName: string, value: any)
table.insert(benchmarks, HttpService:JSONEncode({
metric = metricName,
value = value
}))
end)
end)

afterEach(function()
MetricLogger.useDefaultMetricLogger()
afterAll(function()
local benchmarkFile = tostring(expect.getState().testPath) .. ".json"
FileSystemService:WriteFile(benchmarkFile, "[" .. table.concat(benchmarks, ",") .. "]")
MetricLogger.useDefaultMetricLogger()
end)
```
