Below are deb based instructions for deploying Kaltura Clusters.
Refer to the All-In-One Kaltura Server Installation Guide for more notes about deploying Kaltura in deb supported environments.
- Setting up the Kaltura repos
- Load Balancing
- NFS server
- MySQL Database
- Sphinx Indexing
- Front Nodes
- Batch Nodes
- Analytics
- Nginx VOD Nodes
- Live Streaming with Nginx and the RTMP module
- Upgrade Kaltura
- Platform Monitoring
- Backup and Restore
- If you see a # at the beginning of a line, that line should be run as root.
- Please review the frequently asked questions document for general help before posting to the forums or issue queue.
- For a cluster install, it is very important to use the [debconf response file](https://github.com/kaltura/platform-install-packages/blob/master/deb/kaltura_debconf_response.sh); otherwise, the MySQL 'kaltura' password is auto-generated by the installer. This is fine for a standalone server, but in a cluster the password must be the same on all nodes.
- Kaltura Inc. also provides commercial solutions and services including pro-active platform monitoring, applications, SLA, 24/7 support and professional services. If you're looking for a commercially supported video platform with integrations to commercial encoders, streaming servers, eCDN, DRM and more - Start a Free Trial of the Kaltura.com Hosted Platform or learn more about Kaltura's Commercial OnPrem Edition™. For existing RPM based users, Kaltura offers commercial upgrade options.
Kaltura requires certain ports to be open for proper operation. See the list of required open ports.
This is REQUIRED on all machines; currently Kaltura cannot run properly with SELinux enforcing.
# setenforce permissive
To verify SELinux will not revert to enforcing on the next restart, edit /etc/selinux/config, set SELINUX=permissive and save the file.
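The persistence step above can be done non-interactively with sed. This is an illustrative sketch that edits a temporary copy of the file; on a real host, point cfg at /etc/selinux/config and run it as root:

```shell
# Illustrative: persist SELINUX=permissive. On a real host use cfg=/etc/selinux/config (as root).
cfg=$(mktemp)
echo 'SELINUX=enforcing' > "$cfg"
# Rewrite the SELINUX= line so the setting survives a reboot:
sed -i 's/^SELINUX=.*/SELINUX=permissive/' "$cfg"
grep '^SELINUX=' "$cfg"
```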
You can run Kaltura with or without SSL (state the correct protocol and certificates during the installation).
It is recommended that you use a properly signed certificate and avoid self-signed certificates due to limitations of various browsers in properly loading websites using self-signed certificates.
You can generate a free valid cert using http://cert.startcom.org/.
To verify the validity of your certificate, you can then use SSLShopper's SSL Check utility.
Depending on your certificate, you may also need to set the following directives in /etc/apache2/sites-enabled/zzzkaltura.ssl.conf:
SSLCertificateChainFile
SSLCACertificateFile
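As a sketch, the relevant part of zzzkaltura.ssl.conf might then look like the following (all file paths below are hypothetical; substitute your actual certificate locations):

```
<VirtualHost *:443>
    SSLEngine on
    # Server certificate and private key (hypothetical paths):
    SSLCertificateFile      /etc/ssl/certs/kaltura.crt
    SSLCertificateKeyFile   /etc/ssl/private/kaltura.key
    # Intermediate chain and CA bundle, if your CA requires them (hypothetical paths):
    SSLCertificateChainFile /etc/ssl/certs/kaltura-chain.crt
    SSLCACertificateFile    /etc/ssl/certs/ca-bundle.crt
</VirtualHost>
```

Restart Apache after editing the file for the changes to take effect.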
To achieve proper system operation and receive email notifications (account activation emails, password changes, etc.), all Kaltura machines in your cluster should have a functional email server. This is also useful for monitoring purposes.
By default Amazon Web Services (AWS) EC2 machines are blocked from sending email via port 25. For more information see this thread on AWS forums.
Two working solutions to the AWS EC2 email limitations are:
- Using SendGrid as your mail service (setting up ec2 with Sendgrid and postfix).
- Using Amazon's Simple Email Service.
On all nodes, deploy the Kaltura repo key, add the Kaltura repo and fetch the repo metadata.
When deploying on Debian Jessie [8] or Ubuntu Trusty [14.04], edit/create /etc/apt/sources.list.d/kaltura.list so that it reads:
deb [arch=amd64] http://installrepo.kaltura.org/repo/apt/debian propus main
And import the GPG key with:
# wget -O - http://installrepo.kaltura.org/repo/apt/debian/kaltura-deb-curr.gpg.key|apt-key add -
When deploying on Ubuntu Xenial [16.04] edit/create /etc/apt/sources.list.d/kaltura.list to read:
deb [arch=amd64] http://installrepo.kaltura.org/repo/apt/xenial propus main
And import the GPG key with:
# wget -O - http://installrepo.kaltura.org/repo/apt/xenial/kaltura-deb-curr-256.gpg.key|apt-key add -
The NFS volume is the shared network storage between all machines in the cluster. To learn more, read the Wikipedia article about NFS.
# apt-get install nfs-server ntp
# mkdir -p /opt/kaltura/web
# /etc/init.d/nfs-kernel-server start
Edit /etc/exports
to have the desired settings, for example:
/opt/kaltura/web *(rw,sync,no_root_squash)
Edit /etc/idmapd.conf
and add your domain, for example:
Domain = kaltura.dev
Note that you may choose different NFS settings, which is fine so long as:
- the kaltura and www-data users are both able to write to this volume
- the kaltura and www-data users are both able to create files owned by themselves, i.e. do not use all_squash as an option.
Then set privileges accordingly:
KALTURA_PREFIX=/opt/kaltura
KALTURA_USER=kaltura
KALTURA_GROUP=kaltura
if ! getent group $KALTURA_GROUP >/dev/null; then
addgroup --system --quiet --gid 7373 $KALTURA_GROUP
fi
if ! getent passwd $KALTURA_USER >/dev/null; then
adduser --system --quiet \
--home $KALTURA_PREFIX --no-create-home \
--shell /bin/bash \
--uid 7373 \
--ingroup $KALTURA_GROUP \
$KALTURA_USER
usermod -c "Kaltura server" $KALTURA_USER
usermod -a -G www-data $KALTURA_USER
fi
usermod -a -G kaltura www-data
chown -R kaltura:www-data /opt/kaltura/web
chmod 775 /opt/kaltura/web
exportfs -a
Before continuing, run the following test on all front and batch machines:
# apt-get install telnet
# telnet NFS_HOST 2049
Should return something similar to:
Trying 166.78.104.118...
Connected to kalt-nfs0.
Escape character is '^]'.
# apt-get install mysql-server
The Ubuntu Xenial repos include MySQL 5.7, which the Kaltura Server does not currently support. Therefore, it is essential to install MySQL 5.5 instead. We recommend the Percona deb packages, but any MySQL 5.5 distribution should work equally well. See Installing the MySQL 5.5 Percona deb package.
Once the MySQL server is up and running, install the below packages and configure your DB:
# apt-get install kaltura-postinst ntp
# /opt/kaltura/bin/kaltura-mysql-settings.sh
# mysql -uroot -pYOUR_DB_ROOT_PASS
mysql> GRANT ALL PRIVILEGES ON *.* TO 'root'@'%' IDENTIFIED BY 'YOUR_DB_ROOT_PASS' WITH GRANT OPTION;
mysql> FLUSH PRIVILEGES;
Note that the statement above opens MySQL root access from ALL machines. Depending on your setup, you may want to limit it further to members of your Kaltura cluster only. The remote root user must have access to the MySQL DB during the installation of the front and batch servers. After the Kaltura cluster installation is done, you may want to remove the remote root access for security reasons; it will no longer be needed for the platform to operate, as from that point on it connects using the 'kaltura' user.
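For example, to grant remote root access only to specific cluster members instead of '%', statements like the following can be used (the hostnames below are illustrative; use your actual cluster hosts):

```
mysql> GRANT ALL PRIVILEGES ON *.* TO 'root'@'front0.kaltura.dev' IDENTIFIED BY 'YOUR_DB_ROOT_PASS' WITH GRANT OPTION;
mysql> GRANT ALL PRIVILEGES ON *.* TO 'root'@'batch0.kaltura.dev' IDENTIFIED BY 'YOUR_DB_ROOT_PASS' WITH GRANT OPTION;
mysql> FLUSH PRIVILEGES;
```

Once the cluster installation is complete, each such grant can be removed with DROP USER, e.g. DROP USER 'root'@'front0.kaltura.dev';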
Edit /etc/mysql/my.cnf and change:
bind-address = 127.0.0.1
to:
bind-address = 0.0.0.0
Or bind to a specific address that is reachable by all front and batch nodes; note that 0.0.0.0 will allow access from ANY host.
Then, restart MySQL:
# service mysql restart
Before continuing the deployment, run the following test on all front, Sphinx and Batch machines:
# mysql -uroot -hMYSQL_HOST -p
If the connection fails, you may have a networking issue, run:
# apt-get install telnet
# telnet MYSQL_HOST 3306
Should return something similar to:
Trying 166.78.104.118...
Connected to kalt-mysql0.
Escape character is '^]'.
If that works, the block is at the MySQL level rather than the network level; make sure this is resolved before continuing.
Scaling MySQL is an art of its own. There are two aspects to it: replication (keeping the data on more than one MySQL server for redundancy) and read scaling (setting up read slaves).
To assist with MySQL master-slave replication, please refer to the kaltura-mysql-replication-config.sh script.
To run the replication configuration script, note the following:
- The MySQL server you've installed during the Kaltura setup is your master.
- After completing the Kaltura setup, run the following command from the master machine:
kaltura-mysql-replication-config.sh dbuser dbpass master_db_ip master
- Follow the same instructions above to install every slave machine, then run:
kaltura-mysql-replication-config.sh dbuser dbpass master_db_ip slave
To read more and learn about MySQL master-slave configuration, refer to the official MySQL documentation.
After configuring MySQL replication in your environment, in order to distribute the READ load, you can also configure Kaltura to 'load-balance' MySQL reads between the master and two additional slave machines.
Note that you can only have one machine for writes - this is your master.
Follow these steps to 'load-balance' READ operations between the MySQL servers:
- Edit /opt/kaltura/app/configurations/db.ini
- Find the following section, this is your MASTER (replace the upper case tokens with real values from your network hosts):
propel.connection.hostspec = MASTER_DB_HOST
propel.connection.user = kaltura
propel.connection.password = KALTURA_DB_USER_PASSWORD
propel.connection.dsn = "mysql:host=MASTER_DB_HOST;port=3306;dbname=kaltura;"
- The sections that follow look the same, but after the key propel you'll notice the numbers 2 and 3. These are the second and third MySQL servers, which will be used as SLAVES (replace the upper case tokens with real values from your network hosts):
propel2.connection.hostspec = SECOND_DB_HOST
propel2.connection.user = kaltura
propel2.connection.password = KALTURA_DB_USER_PASSWORD
propel2.connection.dsn = "mysql:host=SECOND_DB_HOST;port=3306;dbname=kaltura;"
propel3.connection.hostspec = THIRD_DB_HOST
propel3.connection.user = kaltura
propel3.connection.password = KALTURA_DB_USER_PASSWORD
propel3.connection.dsn = "mysql:host=THIRD_DB_HOST;port=3306;dbname=kaltura;"
In addition, you should also set up the query cache.
When the query cache is enabled, the server intelligently chooses between master and slave: anything that was not changed recently is read from a slave, otherwise from the master.
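A minimal my.cnf sketch for enabling the query cache on MySQL 5.5 follows (the sizes are illustrative; tune them to your workload):

```
[mysqld]
query_cache_type  = 1
# Illustrative sizes; tune to your workload:
query_cache_size  = 64M
query_cache_limit = 2M
```

Restart MySQL after changing these values.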
# apt-get install kaltura-sphinx
It is strongly recommended that you install at least 2 Sphinx nodes for redundancy.
It is recommended that Sphinx will be installed on its own dedicated machine. However, if needed, Sphinx can be coupled with a front machine in low-resources clusters.
After installing the first cluster node, obtain the auto generated file placed under /tmp/kaltura_*.ans, replace relevant values and use it for the installation of the remaining nodes.
- kaltura-db, kaltura-widgets and kaltura-html5lib, which are installed on the web mount, only need to run on the first node.
- Before starting, make sure the balancer does not direct traffic to the second front node, since it is not yet installed.
Front in Kaltura represents the machines hosting the user-facing components, including the Kaltura API, the KMC and Admin Console, MediaSpace and all client-side widgets.
# apt-get install kaltura-postinst
# /opt/kaltura/bin/kaltura-nfs-client-config.sh <NFS host> <domain> <nobody-user> <nobody-group>
# apt-get install kaltura-front kaltura-widgets kaltura-html5lib kaltura-html5-studio kaltura-kmcng kaltura-clipapp
# . /etc/kaltura.d/system.ini
Make certain this call returns 200:
# curl -I $SERVICE_URL/api_v3/index.php
Output should be similar to:
HTTP/1.1 200 OK
Date: Sat, 14 Mar 2015 17:59:40 GMT
Server: Apache/2.2.15 (CentOS)
X-Powered-By: PHP/5.3.3
X-Kaltura: cached-dispatcher,cache_v3-baf38b7adced7cbac99d06b983aaf654,0.00048708915710449
Access-Control-Allow-Origin: *
Expires: Sun, 19 Nov 2000 08:52:00 GMT
Cache-Control: no-store, no-cache, must-revalidate, post-check=0, pre-check=0
Pragma: no-cache
Vary: Accept-Encoding
X-Me: $SERVICE_URL
Connection: close
Content-Type: text/xml
# apt-get install kaltura-db
# apt-get install kaltura-postinst
# /opt/kaltura/bin/kaltura-nfs-client-config.sh <NFS host> <domain> <nobody-user> <nobody-group>
# apt-get install kaltura-front kaltura-html5lib kaltura-html5-studio kaltura-kmcng kaltura-clipapp
# /opt/kaltura/bin/kaltura-front-config.sh
NOTE: you can now configure the balancer to include this node in its pool.
Batch in Kaltura represents the machines running all async operations. To learn more, read: Introduction to Kaltura Batch Processes.
It is strongly recommended that you install at least 2 batch nodes for redundancy.
# apt-get update
# apt-get install kaltura-postinst
# /opt/kaltura/bin/kaltura-nfs-client-config.sh <NFS host> <domain> <nobody-user> <nobody-group>
# apt-get install kaltura-batch
Adding more batch machines is simple and easy! Due to the distributed architecture of batches in Kaltura, batch machines independently register themselves against the Kaltura cluster and independently assume jobs from the queue.
To scale your system's batch capacity, simply install new batch machines in the cluster.
When running the kaltura-batch-config.sh installer on the batch machine, the installer replaces the config tokens and sets a unique ID per batch. The batch then seamlessly registers against the DB and starts taking available jobs.
The DWH is Kaltura's Analytics server.
# apt-get update
# apt-get install kaltura-dwh kaltura-postinst
# /opt/kaltura/bin/kaltura-nfs-client-config.sh <NFS host> <domain> <nobody-user> <nobody-group>
This is used to achieve on-the-fly repackaging of MP4 files to DASH, HDS, HLS and MSS.
For more info about its features see: https://github.com/kaltura/nginx-vod-module/
# apt-get update
# apt-get install kaltura-base kaltura-nginx
# /opt/kaltura/bin/kaltura-nfs-client-config.sh <NFS host> <domain> <nobody-user> <nobody-group>
For SSL specific configuration options, please see nginx-ssl-config.md
The default delivery profiles for DASH, HDS and HLS are defined here:
mysql> select * from delivery_profile where partner_id=0 and streamer_type in ('applehttp','mpegdash','hdnetworkmanifest');
These can be overridden for any given partner by going to Admin Console->Publishers->Profiles->Delivery profiles.
The decision as to which delivery profile is used by default is made according to values in /opt/kaltura/app/configurations/base.ini:
; max duration in seconds
short_entries_max_duration = 300
short_entries_default_delivery_type = http
secured_default_delivery_type = http
default_delivery_type = http
By default, all are set to 'http', which means progressive download. You can change them to reflect your preferences. For example:
short_entries_default_delivery_type = http
secured_default_delivery_type = hds
default_delivery_type = hds
Would make entries shorter than 5 minutes be delivered as progressive download, while all others are served as HDS, except on iOS, where HLS will be attempted.
Kaltura CE includes the kaltura-nginx package, which is compiled with the Nginx RTMP module.
Please see documentation here nginx-rtmp-live-streaming.md
A longer post about it can be found at https://blog.kaltura.com/free-and-open-live-video-streaming
If using Debian: Jessie [8] or Ubuntu: Trusty [14.04], edit /etc/apt/sources.list.d/kaltura.list so that it reads:
deb [arch=amd64] http://installrepo.kaltura.org/repo/apt/debian propus main
And import the GPG key with:
# wget -O - http://installrepo.kaltura.org/repo/apt/debian/kaltura-deb-curr.gpg.key|apt-key add -
Or, if using Ubuntu Xenial [16.04]:
deb [arch=amd64] http://installrepo.kaltura.org/repo/apt/xenial propus main
And import the GPG key with:
# wget -O - http://installrepo.kaltura.org/repo/apt/xenial/kaltura-deb-curr-256.gpg.key|apt-key add -
Then run the following command to upgrade [this will work for all supported Debian and Ubuntu versions]:
# aptitude dist-upgrade
Then, on front machines:
# dpkg-reconfigure kaltura-base
# dpkg-reconfigure kaltura-front
And on batch machines:
# dpkg-reconfigure kaltura-base
# dpkg-reconfigure kaltura-batch
On sphinx machines:
# dpkg-reconfigure kaltura-base
# dpkg-reconfigure kaltura-sphinx
Please refer to the Setting up Kaltura platform monitoring guide.
Backup and restore is quite simple. Make sure that the following is being regularly backed up:
- A MySQL dump of all Kaltura DBs (kaltura, kaltura_sphinx_log, kalturadw, kalturadw_bisources, kalturadw_ds, kalturalog). You can use the following mysqldump command:
# mysqldump -h$DBHOST -u$DBUSER -p$DBPASSWD -P$DBPORT --routines --single-transaction $TABLE_SCHEMA $TABLE | gzip > $OUT/$TABLE_SCHEMA.$TABLE.sql.gz
- The /opt/kaltura/web directory, which includes all of the platform generated and media files.
- The /opt/kaltura/app/configurations directory, which includes all of the platform configuration files.
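The per-schema dump can be wrapped in a loop over all six Kaltura databases. The sketch below only prints the commands (a dry run) so you can review them before execution; DBHOST, DBUSER, DBPORT and OUT default to illustrative values, and $DBPASSWD is deliberately left unexpanded:

```shell
# Dry run: print one mysqldump command per Kaltura schema; review, then execute manually.
DBHOST=${DBHOST:-localhost}; DBUSER=${DBUSER:-root}; DBPORT=${DBPORT:-3306}; OUT=${OUT:-/backup}
cmds=""
for SCHEMA in kaltura kaltura_sphinx_log kalturadw kalturadw_bisources kalturadw_ds kalturalog; do
  cmds="$cmds
mysqldump -h$DBHOST -u$DBUSER -p\$DBPASSWD -P$DBPORT --routines --single-transaction $SCHEMA | gzip > $OUT/$SCHEMA.sql.gz"
done
echo "$cmds"
```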
Then, if needed, to restore the Kaltura server, follow these steps:
- Install the same version of Kaltura on a clean machine
- Stop all services
- Copy over the web and configurations directories
- Import the MySQL dump
- Restart all services
- Reindex Sphinx with the following commands:
# rm -f /opt/kaltura/log/sphinx/data/*
# cd /opt/kaltura/app/deployment/base/scripts/
# for i in populateSphinx*;do php $i >/tmp/$i.log;done