Error 500 from /api/playout: Unable to cue Offline playout file. #5
Hello. I assume that in the rundown you see your clip marked as "REMOTE"? In that case the problem is that the asset is not properly copied to the playout storage. There may be several reasons for that:
1. Storage
The playout storage is set to id 3 - could you confirm that when you start the worker, there's a log message similar to this?
Based on your configuration, the shared storage is your caspar directory. Inside it you should see a file named ".nebula_root" - that's a file Nebula uses to determine whether the storage is writable.
2. Playout action
The action XML contains:
If you don't see the "playout" action listed in the dialog, make sure the action is included in your settings:
from nebula.settings.models import ActionSettings

def load_cfg(filename: str) -> str:
    return open(f"/settings/actions/{filename}.xml").read()

ACTIONS = [
    ActionSettings(
        id=1,
        name="proxy",
        type="conv",
        settings=load_cfg("proxy"),
    ),
    ActionSettings(
        id=2,
        name="playout",
        type="conv",
        settings=load_cfg("playout"),
    ),
]
3. Target location
As soon as you are able to run the "send to playout" action manually, check the "Jobs" page or the worker logs for errors. If everything works, you should see a file in the media directory on the playout server.
4. PSM
If the file is there, but Nebula still marks it as offline, it is possible that PSM didn't catch it properly. Keep in mind that PSM only handles files scheduled between "now" and "now + 24 hours". If your event is outside that range, you may need to change its start time (that is something I should really improve). It may also be the reason why the conversion job didn't start automatically.
I hope that helps. Please let me know if any of these solutions worked for you; any PR improving the tutorials in this repo would be highly appreciated.
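The storage checks in point 1 can be scripted. The following is a minimal sketch that mirrors what Nebula's worker effectively verifies: the mount point exists, the ".nebula_root" marker file is present, and the directory is writable. The mount path "/mnt/nebula_03" is an assumption for illustration; substitute the path your worker actually uses for storage id 3.

```python
import os

def check_storage(mount_point: str) -> str:
    """Report the state of a Nebula storage mount (hypothetical helper)."""
    marker = os.path.join(mount_point, ".nebula_root")
    if not os.path.isdir(mount_point):
        return "not mounted"
    if not os.path.isfile(marker):
        return "mounted, but .nebula_root marker is missing"
    if not os.access(mount_point, os.W_OK):
        return "mounted, but not writable"
    return "ok"

if __name__ == "__main__":
    # assumed mount path for storage id 3; adjust to your setup
    print(check_storage("/mnt/nebula_03"))
```

Run this inside the worker container: anything other than "ok" explains why the asset stays "REMOTE".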
It could be the storage problem. Unfortunately, I can't really follow the storage point, but it's already late here :D.
When I try to send the video to playout via Send To, I get the following error in the console:
This is what I usually do. Assuming the playout drive is shared by the playout server, configure the storage in Nebula in storages.py:
from nebula.settings.models import StorageSettings

STORAGES = [
    # additional storages may go here; implicit "local" storages
    # mounted using the "volumes" section in docker-compose.yml
    # don't need to be defined, only samba shares
    StorageSettings(
        id=3,
        name="playout",
        protocol="samba",
        path="//playoutserver/playout",
        options={
            "login": "nebula",
            "password": "nebula",
            "samba_version": "3.0",
        },
    ),
]
Then create the storage, set it as the playout storage of the channel, and apply the Nebula settings (make setup). You may also try accessing the worker shell to check that the share is mounted.
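Before debugging the Nebula side, it is worth confirming the samba share is reachable at all from the machine running the worker. This sketch just opens a TCP connection to the CIFS port (445); the host name "playoutserver" is taken from the StorageSettings path above and is an assumption about your network.

```python
import socket

def samba_reachable(host: str, port: int = 445, timeout: float = 3.0) -> bool:
    """True if a TCP connection to the samba/CIFS port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

if __name__ == "__main__":
    # "playoutserver" is the host from the storage path above
    print("playoutserver reachable:", samba_reachable("playoutserver"))
```

If this prints False, the problem is networking or name resolution inside the container, not Nebula's storage handling.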
The command "make setup" gives me the following output:
The storages.py file was not yet available for me; I created it with the content from your last comment, but entered my own data for the login and the path to the share folder.
Maybe the following will help you, because I forgot to add this file: docker-compose.yml:
You need to update it as described in https://github.com/nebulabroadcast/nebula-tutorial/blob/main/doc/remote-storages.md
I actually did that yesterday and then encountered two problems. If I follow the first point:
then my config would look like this:
For point 2:
unfortunately I don't know where to add it. No matter where I add it, I always get an error.
This is the 1:1 docker-compose file I use on my dev machine; I hope that helps. Keep in mind I have both the production storage and the playout storage mapped using samba, so Nebula handles them. If you're expanding from the original docker-compose, you may want to keep the first storage bind-mounted and use the Nebula-managed samba share only for the playout server.
volumes:
  db: {}

services:
  postgres:
    image: postgres
    environment:
      - "POSTGRES_USER=nebula"
      - "POSTGRES_PASSWORD=nebula"
      - "POSTGRES_DB=nebula"
    volumes:
      - "/etc/localtime:/etc/localtime:ro"
      - "db:/var/lib/postgresql/data"
    restart: unless-stopped

  redis:
    image: redis:alpine
    restart: unless-stopped

  backend:
    image: nebulabroadcast/nebula-server:dev
    privileged: true
    ports:
      - "4455:80"
    volumes:
      - "/etc/localtime:/etc/localtime:ro"
      - "./plugins:/plugins"
      - "./settings:/settings"
    environment:
      - "NEBULA_LOG_LEVEL=trace"
    depends_on:
      - redis
      - postgres

  worker:
    image: nebulabroadcast/nebula-worker:dev
    hostname: worker
    privileged: true
    ports:
      - "6251:6251/udp"
      - "6252:6252/udp"
    depends_on:
      - backend
I have copied those parts over; this is what my config looks like now:
When I start docker compose now, I get the following error:
IDK, this comes from Docker, not Nebula... maybe this? https://askubuntu.com/questions/1508129/docker-compose-giving-containerconfig-errors-after-update-today
The server starts again with the docker compose v2 syntax.
However, it does not load the second MP4 under Incoming. Since the file that is currently displayed is corrupt, I tried to add the second file to test whether it works now, but unfortunately it is not displayed. The server output now looks like this:
I have now made some progress: Firefly now loads the files from the correct folder (the share folder).
Send to Proxy no longer works either; it says that write permissions are missing. But the question is whether I really need it, since I have playout.
The MXF container is very picky regarding the essence. If you need to play out arbitrary media files, you may need to use a different profile for sending to playout (for example, use the MOV container, or force transcoding); I don't know your intended workflow. It is indeed possible not to create proxies at all. They are crucial for reviewing and trimming clips, for example, but if you are sure about the content on your production storage, it is completely fine to disable proxy creation. In that case you may also want to set
Ok, I'll try to explain a bit what I have in mind. I want to set up a 24h stream for Twitch via RTMP. I had built this setup with the older Nebula before, and it worked, but I lost the configurations due to a lack of backups and an HDD crash. My setup: in the CasparCG folder of the Windows system there is the media folder, which is shared on the network. I want to be able to put videos in there, which Nebula then pulls and processes. This is in
Now, however, CasparCG should play back whatever is currently scheduled in the scheduler.
Edit: by the way, I am using CasparCG version 2.4.0.
I have now gotten to the point where CasparCG reacts to Nebula. The last remaining problem is probably on the Nebula side: what is playing now and what will be played next is not displayed, although double-clicking one of the rundown files sends the video to CasparCG. I have been able to eliminate all other problems, many of which were settings that were not recognizable at first glance. Your advice about the two Docker syntaxes also helped me a lot. Maybe you know how to fix this last error :D
Great, you're really close! The last step, according to your previous log, is to get the OSC connection working (this is why Nebula does not receive information about what's playing). Please refer to https://github.com/nebulabroadcast/nebula-tutorial/blob/main/doc/casparcg.md and check:
Keep in mind that after changing the configuration in docker-compose.yml, the container has to be re-created.
That's a bit trickier. channels.py:
from nebula.settings.models import PlayoutChannelSettings, AcceptModel

scheduler_accepts = AcceptModel(folders=[1, 2])
rundown_accepts = AcceptModel(folders=[1, 3, 4, 5, 6, 7, 8, 9, 10])

channel1 = PlayoutChannelSettings(
    id=1,
    name="Channel 1",
    fps=25.0,
    plugins=[],
    solvers=[],
    day_start=(7, 0),
    scheduler_accepts=scheduler_accepts,
    rundown_accepts=rundown_accepts,
    rundown_columns=[],
    send_action=2,
    engine="casparcg",
    allow_remote=False,
    controller_host="worker",
    controller_port=42101,
    playout_storage=3,
    playout_dir="media",
    playout_container="mov",
    config={
        "caspar_host": "192.168.178.59",
        "caspar_port": 5250,
        "caspar_osc_port": 6251,
        "caspar_channel": 1,
        "caspar_feed_layer": 10,
    },
)

# Configure second channel similarly
channel2 = PlayoutChannelSettings(
    id=2,
    name="Channel 2",
    fps=25.0,
    plugins=[],
    solvers=[],
    day_start=(7, 0),
    scheduler_accepts=scheduler_accepts,
    rundown_accepts=rundown_accepts,
    rundown_columns=[],
    send_action=2,
    engine="casparcg",
    allow_remote=False,
    controller_host="worker",
    controller_port=42102,
    playout_storage=3,
    playout_dir="media",
    playout_container="mov",
    config={
        "caspar_host": "192.168.178.59",
        "caspar_port": 5250,
        "caspar_osc_port": 6252,
        "caspar_channel": 2,
        "caspar_feed_layer": 10,
    },
)
CHANNELS = [channel1, channel2]

docker-compose.yml:
volumes:
  db: {}

services:
  postgres:
    image: postgres
    environment:
      - "POSTGRES_USER=nebula"
      - "POSTGRES_PASSWORD=nebula"
      - "POSTGRES_DB=nebula"
    volumes:
      - "/etc/localtime:/etc/localtime:ro"
      - "db:/var/lib/postgresql/data"
    restart: unless-stopped

  redis:
    image: redis:alpine
    restart: unless-stopped

  backend:
    image: nebulabroadcast/nebula-server:latest
    privileged: true
    ports:
      - "4455:80"
    volumes:
      - "/etc/localtime:/etc/localtime:ro"
      - "./plugins:/plugins"
      - "./settings:/settings"
      - "./storage:/mnt/nebula_01"
    environment:
      - "NEBULA_LOG_LEVEL=trace"
    depends_on:
      - redis
      - postgres

  worker:
    image: nebulabroadcast/nebula-worker:latest
    hostname: worker
    privileged: true
    ports:
      - "6251:6251/udp"
      - "6252:6252/udp"
    volumes:
      - "./storage:/mnt/nebula_01"
    depends_on:
      - backend

And the CasparCG config:
<?xml version="1.0" encoding="utf-8"?>
<configuration>
  <paths>
    <media-path>media/media</media-path>
    <log-path>log/</log-path>
    <data-path>data/</data-path>
    <template-path>template/</template-path>
    <font-path>font/</font-path>
  </paths>
  <lock-clear-phrase>secret</lock-clear-phrase>
  <channels>
    <channel>
      <video-mode>1080p5000</video-mode>
      <consumers>
        <screen/>
        <system-audio/>
      </consumers>
    </channel>
  </channels>
  <controllers>
    <tcp>
      <port>5250</port>
      <protocol>AMCP</protocol>
    </tcp>
  </controllers>
  <amcp>
    <media-server>
      <host>localhost</host>
      <port>8000</port>
    </media-server>
  </amcp>
  <osc>
    <predefined-clients>
      <predefined-client>
        <address>192.168.178.56</address>
        <port>6251</port>
      </predefined-client>
      <predefined-client>
        <address>192.168.178.56</address>
        <port>6252</port>
      </predefined-client>
    </predefined-clients>
  </osc>
</configuration>
The firewall rules on Windows are set, and on Ubuntu the two ports are open and allowed through the firewall as well. It still doesn't want to work somehow. I also ran the command you mentioned in your last comment after the configuration changes. Could it be due to the CasparCG version?
LGTM.... This is hard to debug, honestly. I'd suspect the firewall or the host IP, but it's hard to say. The "Waiting for OSC" log message shows when the OSC connection is not established.
I think I have found the error, but I don't know how to fix it yet. CasparCG has one casparcg.exe and one scanner.exe. How is CasparCG working for you? I can't get any further with this topic: something is wrong with the OSC and I don't quite understand what the problem is. The scanner.exe is started and works without errors.
Ok, I have now monitored the traffic with Wireshark and discovered the following. I have adapted Nebula, at least for channel 1, to this port, and lo and behold, it works. I will put the configs together and make them available to you in case someone else has the same constellation.
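Since the final fix came down to a port mismatch, it may help future readers to cross-check the OSC ports in caspar.config against the `caspar_osc_port` values in channels.py mechanically. The sketch below parses the `<predefined-client>` entries with the standard library; the embedded config and the port mapping are taken from the files shown earlier in this thread.

```python
import xml.etree.ElementTree as ET

def osc_client_ports(config_xml: str) -> list[int]:
    """Extract the <port> of every <predefined-client> in a caspar.config."""
    root = ET.fromstring(config_xml)
    return [
        int(client.findtext("port"))
        for client in root.iter("predefined-client")
    ]

config = """
<configuration>
  <osc>
    <predefined-clients>
      <predefined-client><address>192.168.178.56</address><port>6251</port></predefined-client>
      <predefined-client><address>192.168.178.56</address><port>6252</port></predefined-client>
    </predefined-clients>
  </osc>
</configuration>
"""

# channel id -> caspar_osc_port, as configured in channels.py
nebula_ports = {1: 6251, 2: 6252}

caspar_ports = osc_client_ports(config)
for channel_id, port in nebula_ports.items():
    status = "ok" if port in caspar_ports else "MISMATCH"
    print(f"channel {channel_id}: caspar_osc_port={port} -> {status}")
```

Any "MISMATCH" line means CasparCG is sending OSC to a port Nebula is not listening on, which is exactly the "Waiting for OSC" symptom.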
Hello everyone,
the tutorial works really well and Nebula was installed quite quickly. However, I am currently stuck on the CasparCG instructions. I have done everything as described in the tutorial, but it does not work: when I double-click a video under Rundown in Firefly, this error occurs:
Error 500 from http://192.168.178.56/api/playout Unable to cue Offline playout file.
Nebula Log:
services.py
channel.py:
in the actions folder I have created a new XML file called playout.xml with the following content:
I have now installed CasparCG on my Windows computer (same network) and the configuration looks like this:
The output of CasparCG is as follows: