Added multi-monitor splitting support
Lua/BIMG outputs only. Palettes are generated per monitor. BIMG playback
requires the multi-monitor extension, which is only available in bimg-player.lua.
Also disabled Floyd-Steinberg dithering on GPUs (again); it seems to fail on newer GPUs.
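A minimal sketch of the workflow this enables, assuming the `-M` syntax documented in the README below (WxH taken to be the monitor grid, S the text scale); the 3x2 grid, 0.5 scale, and file names are placeholders:

```sh
# Split an image into a 3x2 grid of BIMG frames, one optimized palette per monitor:
./sanjuuni -i poster.png -b -M3x2@0.5 -o poster.bimg
# Copy poster.bimg to a ComputerCraft computer and play it with bimg-player.lua,
# which walks you through monitor calibration on first run.
```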
MCJack123 committed Jul 19, 2023
1 parent 601ec0e commit edea580
Showing 5 changed files with 214 additions and 109 deletions.
61 changes: 33 additions & 28 deletions README.md
@@ -2,8 +2,11 @@
Converts images and videos into a format that can be displayed in ComputerCraft. Spiritual successor to [juroku](https://github.com/tmpim/juroku), which is hard to build and isn't as flexible.

## Installation
### Windows
Download the latest release from the [releases tab](https://github.com/MCJack123/sanjuuni/releases). This includes the built binary, the Lua player programs, plus all required libraries.

### Linux
#### AUR
#### Arch Linux (AUR)

sanjuuni is available in the Arch User Repository; use your favorite AUR helper to install it:
```sh
@@ -62,38 +65,40 @@ usage: ./sanjuuni [options] -i <input> [-o <output> | -s <port> | -w <port> | -u
sanjuuni converts images and videos into a format that can be displayed in
ComputerCraft.
-ifile, --input=file Input image or video
-Sfile, --subtitle=file ASS-formatted subtitle file to add to the video
-opath, --output=path Output file path
-l, --lua Output a Lua script file (default for images; only does one frame)
-n, --nfp Output an NFP format image for use in paint (changes proportions!)
-r, --raw Output a rawmode-based image/video file (default for videos)
-b, --blit-image Output a blit image (BIMG) format image/animation file
-3, --32vid Output a 32vid format binary video file with compression + audio
-sport, --http=port Serve an HTTP server that has each frame split up + a player program
-wport, --websocket=port Serve a WebSocket that sends the image/video with audio
-uurl, --websocket-client=url Connect to a WebSocket server to send image/video with audio
-T, --streamed For servers, encode data on-the-fly instead of doing it ahead of time (saves memory at the cost of speed and only one client)
-p, --default-palette Use the default CC palette instead of generating an optimized one
-Ppalette, --palette=palette Use a custom palette instead of generating one, or lock certain colors
-t, --threshold Use thresholding instead of dithering
-O, --ordered Use ordered dithering
-L, --lab-color Use CIELAB color space for higher quality color conversion
-8, --octree Use octree for higher quality color conversion (slower)
-k, --kmeans Use k-means for highest quality color conversion (slowest)
-cmode, --compression=mode Compression type for 32vid videos; available modes: none|lzw|deflate|custom
-B, --binary Output blit image files in a more-compressed binary format (requires opening the file in binary mode)
-d, --dfpwm Use DFPWM compression on audio
-m, --mute Remove audio from output
-Wsize, --width=size Resize the image to the specified width
-Hsize, --height=size Resize the image to the specified height
-h, --help Show this help
-ifile, --input=file Input image or video
-Sfile, --subtitle=file ASS-formatted subtitle file to add to the video
-opath, --output=path Output file path
-l, --lua Output a Lua script file (default for images; only does one frame)
-n, --nfp Output an NFP format image for use in paint (changes proportions!)
-r, --raw Output a rawmode-based image/video file (default for videos)
-b, --blit-image Output a blit image (BIMG) format image/animation file
-3, --32vid Output a 32vid format binary video file with compression + audio
-sport, --http=port Serve an HTTP server that has each frame split up + a player program
-wport, --websocket=port Serve a WebSocket that sends the image/video with audio
-uurl, --websocket-client=url Connect to a WebSocket server to send image/video with audio
-T, --streamed For servers, encode data on-the-fly instead of doing it ahead of time (saves memory at the cost of speed and only one client)
-p, --default-palette Use the default CC palette instead of generating an optimized one
-Ppalette, --palette=palette Use a custom palette instead of generating one, or lock certain colors
-t, --threshold Use thresholding instead of dithering
-O, --ordered Use ordered dithering
-L, --lab-color Use CIELAB color space for higher quality color conversion
-8, --octree Use octree for higher quality color conversion (slower)
-k, --kmeans Use k-means for highest quality color conversion (slowest)
-cmode, --compression=mode Compression type for 32vid videos; available modes: none|lzw|deflate|custom
-B, --binary Output blit image files in a more-compressed binary format (requires opening the file in binary mode)
-d, --dfpwm Use DFPWM compression on audio
-m, --mute Remove audio from output
-Wsize, --width=size Resize the image to the specified width
-Hsize, --height=size Resize the image to the specified height
-M[WxH[@S]], --monitor-size[=WxH[@S]] Split the image into multiple parts for large monitors (images only)
-h, --help Show this help
```
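A few illustrative invocations following the options above; the file names, extensions, and port number are placeholders rather than anything taken from the repository:

```sh
# Image to a self-contained Lua script (the default output for images):
./sanjuuni -i picture.png -o picture.lua
# Video to a 32vid file with DFPWM-compressed audio:
./sanjuuni -i video.mp4 -3 -d -o video.32v
# Stream a video over a WebSocket on port 8080 for websocket-player.lua:
./sanjuuni -i video.mp4 -w 8080
```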

Custom palettes are specified as a list of 16 comma-separated 6-digit hex codes, optionally preceded by `#`. Blank entries can be left empty or filled with an `X`. Example: `#FFFFFF,X,X,X,X,X,X,#999999,777777,X,X,X,X,X,X,#000000`
Custom palettes are specified as a list of 16 comma-separated 6-digit hex codes, optionally preceded by `#`. Blank entries can be left empty or filled with an `X`. Example: `#FFFFFF,X,X,X,X,X,X,#999999,777777,,,,,,,#000000`
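For instance, a hedged sketch of locking a few palette slots while leaving the blank entries for sanjuuni to fill in (the input and output names are placeholders):

```sh
# Lock white, two grays, and black; the X/empty slots are left for sanjuuni to choose:
./sanjuuni -i photo.jpg --palette='#FFFFFF,X,X,X,X,X,X,#999999,#777777,,,,,,,#000000' -o photo.lua
```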

### Playback programs
* `32vid-player.lua` plays back 32vid video/audio files from the disk. Simply give it the file name and it will decode and play the file.
* `bimg-player.lua` displays BIMG images or animations. Simply give it the file name and it will decode and play the file.
* `raw-player.lua` plays back raw video files from the disk. Simply give it the file name and it will decode and play the file.
* `websocket-player.lua` plays a stream from a sanjuuni WebSocket server. Simply give it the WebSocket URL and it will play the stream, with audio if a speaker is attached.
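On the ComputerCraft side each player takes a single argument, as described above; a sketch, assuming the player scripts and media files have already been copied to the computer (file names and the WebSocket URL are placeholders):

```sh
bimg-player.lua poster.bimg
32vid-player.lua video.32v
websocket-player.lua ws://my-server:8080
```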

66 changes: 57 additions & 9 deletions bimg-player.lua
@@ -4,8 +4,7 @@ local file, err = fs.open(shell.resolve(path), "rb")
if not file then error("Could not open file: " .. err) end
local img = textutils.unserialize(file.readAll())
file.close()
term.clear()
for _, frame in ipairs(img) do
local function drawFrame(frame, term)
for y, row in ipairs(frame) do
term.setCursorPos(1, y)
term.blit(table.unpack(row))
@@ -15,11 +14,60 @@ for _, frame in ipairs(img) do
if type(c) == "table" then term.setPaletteColor(2^i, table.unpack(c))
else term.setPaletteColor(2^i, c) end
end end
if img.animation then sleep(frame.duration or img.secondsPerFrame or 0.05)
else read() break end
if img.multiMonitor then term.setTextScale(img.multiMonitor.scale or 0.5) end
end
if img.multiMonitor then
local width, height = img.multiMonitor.width, img.multiMonitor.height
local monitors = settings.get('sanjuuni.multimonitor')
if not monitors or #monitors < height or #monitors[1] < width then
term.clear()
term.setCursorPos(1, 1)
print('This image needs monitors to be calibrated before being displayed. Please right-click each monitor in order, from the top left corner to the bottom right corner, going right first, then down.')
monitors = {}
local names = {}
for y = 1, height do
monitors[y] = {}
for x = 1, width do
local _, oy = term.getCursorPos()
for ly = 1, height do
term.setCursorPos(3, oy + ly - 1)
term.clearLine()
for lx = 1, width do term.blit('\x8F ', (lx == x and ly == y) and '00' or '77', 'ff') end
end
term.setCursorPos(3, oy + height)
term.write('(' .. x .. ', ' .. y .. ')')
term.setCursorPos(1, oy)
repeat
local _, name = os.pullEvent('monitor_touch')
monitors[y][x] = name
until not names[name]
names[monitors[y][x]] = true
sleep(0.25)
end
end
settings.set('sanjuuni.multimonitor', monitors)
settings.save()
print('Calibration complete. Settings have been saved for future use.')
end
for i = 1, #img, width * height do
for y = 1, height do
for x = 1, width do
drawFrame(img[i + (y-1) * width + x-1], peripheral.wrap(monitors[y][x]))
end
end
if img.animation then sleep(img[i].duration or img.secondsPerFrame or 0.05)
else read() break end
end
else
term.clear()
for _, frame in ipairs(img) do
drawFrame(frame, term)
if img.animation then sleep(frame.duration or img.secondsPerFrame or 0.05)
else read() break end
end
term.setBackgroundColor(colors.black)
term.setTextColor(colors.white)
term.clear()
term.setCursorPos(1, 1)
for i = 0, 15 do term.setPaletteColor(2^i, term.nativePaletteColor(2^i)) end
end
term.setBackgroundColor(colors.black)
term.setTextColor(colors.white)
term.clear()
term.setCursorPos(1, 1)
for i = 0, 15 do term.setPaletteColor(2^i, term.nativePaletteColor(2^i)) end
2 changes: 1 addition & 1 deletion src/generator.cpp
@@ -514,5 +514,5 @@ std::string make32vid_cmp(const uchar * characters, const uchar * colors, const
}

std::string makeLuaFile(const uchar * characters, const uchar * colors, const std::vector<Vec3b>& palette, int width, int height) {
return "local image, palette = " + makeTable(characters, colors, palette, width, height) + "\n\nterm.clear()\nfor i = 0, #palette do term.setPaletteColor(2^i, table.unpack(palette[i])) end\nfor y, r in ipairs(image) do\n term.setCursorPos(1, y)\n term.blit(table.unpack(r))\nend\nread()\nfor i = 0, 15 do term.setPaletteColor(2^i, term.nativePaletteColor(2^i)) end\nterm.setBackgroundColor(colors.black)\nterm.setTextColor(colors.white)\nterm.setCursorPos(1, 1)\nterm.clear()\n";
return "-- Generated with sanjuuni\n-- https://sanjuuni.madefor.cc\nlocal image, palette = " + makeTable(characters, colors, palette, width, height) + "\n\nterm.clear()\nfor i = 0, #palette do term.setPaletteColor(2^i, table.unpack(palette[i])) end\nfor y, r in ipairs(image) do\n term.setCursorPos(1, y)\n term.blit(table.unpack(r))\nend\nread()\nfor i = 0, 15 do term.setPaletteColor(2^i, term.nativePaletteColor(2^i)) end\nterm.setBackgroundColor(colors.black)\nterm.setTextColor(colors.white)\nterm.setCursorPos(1, 1)\nterm.clear()\n";
}
2 changes: 1 addition & 1 deletion src/quantize.cpp
@@ -548,7 +548,7 @@ Mat thresholdImage(Mat& image, const std::vector<Vec3b>& palette, OpenCL::Device
Mat ditherImage(Mat& image, const std::vector<Vec3b>& palette, OpenCL::Device * device) {
Mat retval(image.width, image.height, device);
#ifdef HAS_OPENCL
if (device != NULL) {
if (device != NULL && false) {
ulong progress_size = image.height / WORKGROUP_SIZE * WORKGROUP_SIZE + (image.height % WORKGROUP_SIZE ? WORKGROUP_SIZE : 0);
OpenCL::Memory<uchar> palette_mem(*device, palette.size(), 3);
for (int i = 0; i < palette.size(); i++) {palette_mem[i*3] = palette[i][0]; palette_mem[i*3+1] = palette[i][1]; palette_mem[i*3+2] = palette[i][2];}

