Seafile is an open source cloud storage system for file sync, share, and document collaboration. SeaDoc is an extension of Seafile that provides a lightweight online collaborative document feature. The different components of the Seafile project are released under different licenses: As the system admin, you can enter the admin panel by clicking Backup and recovery: Recover corrupted files after a server hard shutdown or system crash: You can run Seafile GC to remove unused files: When you set up the Seahub website, you should have set up an admin account. After you log in as an admin, you may add/delete users and file libraries. Since version 11.0, if you need to change a user's external ID, you can manually modify the database table The administrator can reset a user's password in the \"System Admin\" page. In a private server, the default settings don't allow users to reset their password by email. If you want to enable this, you first have to set up notification email. You may run Tip Enter the docker image, then go to Under the seafile-server-latest directory, run In the Pro Edition, Seafile offers four audit logs in the system admin panel: The logging feature is turned off by default before version 6.0. Add the following option to The audit log data is saved in There are generally two parts of data to back up There are 3 databases: The backup is a two-step procedure: The second sequence is better in the sense that it avoids library corruption. Like other backup solutions, some new data can be lost in recovery. There is always a backup window. However, if your storage backup mechanism can finish quickly enough, using the first sequence can retain more data. We assume your seafile data directory is in It's recommended to back up the database to a separate file each time. Don't overwrite older database backups for at least a week. 
MySQL Assume your database names are The data files are all stored in the To directly copy the whole data directory, This produces a separate copy of the data directory each time. You can delete older backup copies after a new one is completed. If you have a lot of data, copying the whole data directory would take a long time. You can use rsync to do incremental backups. This command backs up the data directory to Now suppose your primary Seafile server is broken and you're switching to a new machine. Use the backup data to restore your Seafile instance: Now, with the latest valid database backup files at hand, you can restore them. MySQL We assume your seafile volume's path is in The data files to be backed up: Use the following command to clear expired session records in the Seahub database: Tip Enter the docker image, then go to Use the following command to simultaneously clean up records older than 90 days in the Activity, sysadmin_extra_userloginlog, FileAudit, FileUpdate, FileHistory, PermAudit, and FileTrash tables: You can also clean these tables manually as follows. Use the following command to clear the activity records: Use the following command to clean the login records: Use the following command to clean the file access records: Use the following command to clean the file update records: Use the following command to clean the permission change audit records: Use the following command to clean the file history records: Since version 6.2, we offer a command to clear outdated library records in the Seafile database, e.g. records that are not deleted after a library is deleted. This is because users can restore a deleted library, so we can't delete these records at library deletion time. This command has been improved in version 10.0, including: It will clear the invalid data in small batches, avoiding consuming too much database resource in a short time. 
Dry-run mode: if you just want to see how much invalid data can be deleted without actually deleting any data, you can use the dry-run option, e.g. There are two tables in the Seafile db that are related to library sync tokens. When you have many sync clients connected to the server, these two tables can have a large number of rows. Many of them are no longer actively used. You may clean the tokens that haven't been used in a recent period with the following SQL query: xxxx is the UNIX timestamp for the time before which tokens will be deleted. To be safe, you can first check how many tokens will be removed: Since version 7.0.8 pro, Seafile provides commands to export reports via the command line. Tip Enter the docker image, then go to On the server side, Seafile stores the files in the libraries in an internal format. Seafile has its own representation of directories and files (similar to Git). With the default installation, these internal objects are stored in the server's file system directly (such as Ext4, NTFS). But most file systems don't assure the integrity of file contents after a hard shutdown or system crash. So if new Seafile internal objects are being written when the system crashes, they can be corrupted after the system reboots. This will make part of the corresponding library inaccessible. Warning If you store the seafile-data directory in a battery-backed NAS (like EMC or NetApp), or use the S3 backend available in the Pro edition, the internal objects won't be corrupted. We provide a seaf-fsck.sh script to check the integrity of libraries. The seaf-fsck tool accepts the following arguments: Tip Enter the docker image, then go to There are three modes of operation for seaf-fsck: Running seaf-fsck.sh without any arguments runs a read-only integrity check for all libraries. If you want to check the integrity of specific libraries, just append the library IDs as arguments: The output looks like: The corrupted files and directories are reported. 
Sometimes you can see output like the following: This means the \"head commit\" (current state of the library) recorded in the database is not consistent with the library data. In such cases, fsck will try to find the last consistent state and check the integrity in that state. Tip If you have many libraries, it's helpful to save the fsck output into a log file for later analysis. Corruption repair in seaf-fsck basically works in two steps: Running the following command repairs all the libraries: Most of the time, you run the read-only integrity check first to find out which libraries are corrupted, and then repair specific libraries with the following command: After repairing, seaf-fsck includes the list of corrupted files and folders in the library history, so it's much easier to locate corrupted paths. To check all libraries and find out which ones are corrupted, the system admin can run seaf-fsck.sh without any arguments and save the output to a log file. Search for the keyword \"Fail\" in the log file to locate corrupted libraries. You can run seaf-fsck to check all libraries while your Seafile server is running. It won't damage or change any files. When the system admin finds a corrupted library, they should run seaf-fsck.sh with \"--repair\" for that library. After the command fixes the library, the admin should inform users to recover files from other places. There are two ways: Starting from Pro edition 7.1.5, an option is added to speed up FSCK. Most of the running time of seaf-fsck is spent on calculating hashes for file contents. This hash is compared with the block object ID; if they're not consistent, the block is detected as corrupted. In many cases the file contents aren't corrupted; some objects are just missing from the system. So it's enough to only check for object existence, which greatly speeds up the fsck process. To skip checking file contents, add the \"--shallow\" or \"-s\" option to seaf-fsck. 
You can use seaf-fsck to export all the files in libraries to an external file system (such as Ext4). This procedure doesn't rely on the seafile database. As long as you have your seafile-data directory, you can always export your files from Seafile to an external file system. The command syntax is The argument Currently only unencrypted libraries can be exported. Encrypted libraries will be skipped. Seafile uses storage de-duplication technology to reduce storage usage. The underlying data blocks are not removed immediately after you delete a file or a library. As a result, the number of unused data blocks will increase on the Seafile server. To release the storage space occupied by unused blocks, you have to run a \"garbage collection\" program to clean up unused blocks on your server. The GC program cleans up two types of unused blocks: To see how much garbage can be collected without actually removing any garbage, use the dry-run option: The output should look like: If you give specific library IDs, only those libraries will be checked; otherwise all libraries will be checked. repos have blocks to be removed Notice that at the end of the output there is a \"repos have blocks to be removed\" section. It contains the list of libraries that have garbage blocks. Later, when you run GC without the --dry-run option, you can use these library IDs as input arguments to the GC program. To actually remove garbage blocks, run without the --dry-run option: If library IDs are specified, only those libraries will be checked for garbage. As described before, there are two types of garbage blocks to be removed. Sometimes just removing the first type of blocks (those that belong to deleted libraries) is good enough. In this case, the GC program won't bother to check the libraries for outdated historic blocks. The \"-r\" option implements this feature: Success Libraries deleted by the users are not immediately removed from the system. 
Instead, they're moved into a \"trash\" in the system admin page. Before they're cleared from the trash, their blocks won't be garbage collected. Since Pro server 8.0.6 and community edition 9.0, you can remove garbage fs objects. It should be run without the --dry-run option: Bug reports This command has a bug before Pro Edition 10.0.15 and Community Edition 11.0.7. It could cause virtual libraries (e.g. shared folders) to fail to merge into their parent libraries. Please avoid using this option in the affected versions, and contact our support team if you are affected by this bug. You can specify the thread number in GC with the \"-t\" option, which can be used together with all other options. Each thread does GC on one library. For example, the following command uses 20 threads to GC all libraries: Since the threads are concurrent, the output of each thread may mix with the others. The library ID is printed in each line of output. GC usually runs quite slowly, as it needs to traverse the entire library history, so you can use multiple threads to run GC in parallel. For even larger deployments, it's also desirable to run GC on multiple servers in parallel. A simple pattern to divide the workload among multiple GC servers is to assign libraries to servers based on library ID. Since Pro edition 7.1.5, this is supported. You can add the \"--id-prefix\" option to seaf-gc.sh to specify the library ID prefix. For example, the command below will only process libraries having \"a123\" as their ID prefix. Seafile uses HTTP(S) to sync files between client and server (since version 4.1.0). Seafile provides a feature called encrypted libraries to protect your privacy. File encryption/decryption is performed client-side when using the desktop client for file synchronization. The password of an encrypted library is not stored on the server. Even the system admin of the server can't view the file contents. 
There are a few limitations of this feature: Client-side encryption works on the iOS client since version 2.1.6. The Android client supports client-side encryption since version 2.1.0. When you create an encrypted library, you'll need to provide a password for it. All the data in that library will be encrypted with the password before being uploaded to the server (see the limitations above). The encryption procedure is: The above encryption procedure can be executed on the desktop and mobile clients. The Seahub browser client uses a different encryption procedure that happens on the server; because of this, your password will be transferred to the server. When you sync an encrypted library to the desktop, the client needs to verify your password. When you create the library, a \"magic token\" is derived from the password and library id. This token is stored with the library on the server side. The client uses this token to check whether your password is correct before you sync the library. The magic token is generated by the PBKDF2 algorithm with 1000 iterations of the SHA256 hash. For maximum security, the plain-text password isn't saved on the client side either. The client only saves the key/iv pair derived from the \"file key\", which is used to decrypt the data. So if you forget the password, you won't be able to recover it or access your data on the server. When a file download link is clicked, a random URL is generated for the user to access the file from the fileserver. This URL can only be accessed once; after that, all access to it is denied. So even if someone else happens to learn the URL, they can't access it anymore. User login passwords are stored in hashed form only. Note that the user login password is different from the passwords used in encrypted libraries. In the database, its format is The record is divided into 4 parts by the $ sign. To calculate the hash: Starting from version 6.0, we added Two-Factor Authentication to enhance account security. 
There are two ways to enable this feature: The system admin can tick the checkbox in the \"Password\" section of the system settings page, or just add the following settings to
"},{"location":"#contact-information","title":"Contact information","text":"
"},{"location":"changelog/","title":"Changelog","text":""},{"location":"changelog/#changelogs","title":"Changelogs","text":"
"},{"location":"administration/","title":"Administration","text":""},{"location":"administration/#enter-the-admin-panel","title":"Enter the admin panel","text":"System Admin
in the popup menu of the avatar.
"},{"location":"administration/#logs","title":"Logs","text":"
"},{"location":"administration/#backup-and-recovery","title":"Backup and Recovery","text":"
"},{"location":"administration/#clean-database","title":"Clean database","text":"
"},{"location":"administration/#export-report","title":"Export report","text":"
"},{"location":"administration/account/","title":"Account Management","text":""},{"location":"administration/account/#user-management","title":"User Management","text":"social_auth_usersocialauth
to map the new external ID to internal ID.reset-admin.sh
script under the seafile-server-latest directory. This script helps you reset the admin account and password. No data will be deleted from the admin account; this only unlocks the account and changes its password./opt/seafile/seafile-server-latest
./seahub.sh python-env python seahub/manage.py check_user_quota
, an email will be sent when a user's quota exceeds 90%. If you want to enable this, you first have to set up notification email.
seafevents.conf
to turn it on:[Audit]\n## Audit log is disabled by default.\n## Enabling it fills additional SQL tables; make sure your SQL server can handle the load.\nenabled = true\n
seahub_db
.
"},{"location":"administration/backup_recovery/#backup-steps","title":"Backup steps","text":"
"},{"location":"administration/backup_recovery/#backup-order-database-first-or-data-directory-first","title":"Backup Order: Database First or Data Directory First","text":"
/opt/seafile
for binary package based deployment (or /opt/seafile-data
for docker based deployment). And you want to backup to /backup
directory. The /backup
can be an NFS or Windows share mount exported by another machine, or just an external disk. You can create a layout similar to the following in /backup
directory:
"},{"location":"administration/backup_recovery/#backup-and-restore-for-binary-package-based-deployment","title":"Backup and restore for binary package based deployment","text":""},{"location":"administration/backup_recovery/#backing-up-databases","title":"Backing up Databases","text":"/backup\n---- databases/ contains database backup files\n---- data/ contains backups of the data directory\n
ccnet_db
, seafile_db
and seahub_db
. mysqldump automatically locks the tables, so you don't need to stop the Seafile server when backing up MySQL databases. Since the database tables are usually very small, dumping won't take long.
"},{"location":"administration/backup_recovery/#backing-up-seafile-library-data","title":"Backing up Seafile library data","text":"mysqldump -h [mysqlhost] -u[username] -p[password] --opt ccnet_db > /backup/databases/ccnet_db.sql.`date +\"%Y-%m-%d-%H-%M-%S\"`\n\nmysqldump -h [mysqlhost] -u[username] -p[password] --opt seafile_db > /backup/databases/seafile_db.sql.`date +\"%Y-%m-%d-%H-%M-%S\"`\n\nmysqldump -h [mysqlhost] -u[username] -p[password] --opt seahub_db > /backup/databases/seahub_db.sql.`date +\"%Y-%m-%d-%H-%M-%S\"`\n
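Since older dumps should be kept for at least a week but not forever, a retention step can prune them. This is a hedged sketch, not part of the official instructions; the 7-day window and file names are assumptions, and the demo runs in a temporary directory standing in for /backup/databases:

```shell
# Hedged sketch: keep only the last 7 days of timestamped database dumps.
# BACKUP_DIR is a demo placeholder; in production point it at /backup/databases.
BACKUP_DIR=$(mktemp -d)
touch "$BACKUP_DIR/seafile_db.sql.new"                   # fresh dump
touch -d '10 days ago' "$BACKUP_DIR/seafile_db.sql.old"  # stale dump (GNU touch)
# Delete dump files last modified more than 7 days ago.
find "$BACKUP_DIR" -name 'seafile_db.sql.*' -type f -mtime +7 -delete
ls "$BACKUP_DIR"
```

Run this after each successful mysqldump so a failed backup night never deletes your only recent copy.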
/opt/seafile
directory, so just back up the whole directory. You can directly copy the whole directory to the backup destination, or you can use rsync to do incremental backup. cp -R /opt/seafile /backup/data/seafile-`date +\"%Y-%m-%d-%H-%M-%S\"`\n
rsync -az /opt/seafile /backup/data\n
/backup/data/seafile
.
"},{"location":"administration/backup_recovery/#restore-the-databases","title":"Restore the databases","text":"/backup/data/seafile
to the new machine. Let's assume the seafile deployment location on the new machine is also /opt/seafile
.
"},{"location":"administration/backup_recovery/#backup-and-restore-for-docker-based-deployment","title":"Backup and restore for Docker based deployment","text":""},{"location":"administration/backup_recovery/#structure","title":"Structure","text":"mysql -u[username] -p[password] ccnet_db < ccnet_db.sql.2013-10-19-16-00-05\nmysql -u[username] -p[password] seafile_db < seafile_db.sql.2013-10-19-16-00-20\nmysql -u[username] -p[password] seahub_db < seahub_db.sql.2013-10-19-16-01-05\n
/opt/seafile-data
. And you want to backup to /backup
directory.
"},{"location":"administration/backup_recovery/#backing-up-database","title":"Backing up Database","text":"/opt/seafile-data/seafile/conf # configuration files\n/opt/seafile-data/seafile/seafile-data # data of seafile\n/opt/seafile-data/seafile/seahub-data # data of seahub\n
"},{"location":"administration/backup_recovery/#backing-up-seafile-library-data_1","title":"Backing up Seafile library data","text":""},{"location":"administration/backup_recovery/#to-directly-copy-the-whole-data-directory","title":"To directly copy the whole data directory","text":"# It's recommended to backup the database to a separate file each time. Don't overwrite older database backups for at least a week.\ncd /backup/databases\ndocker exec -it seafile-mysql mysqldump -u[username] -p[password] --opt ccnet_db > ccnet_db.sql\ndocker exec -it seafile-mysql mysqldump -u[username] -p[password] --opt seafile_db > seafile_db.sql\ndocker exec -it seafile-mysql mysqldump -u[username] -p[password] --opt seahub_db > seahub_db.sql\n
"},{"location":"administration/backup_recovery/#use-rsync-to-do-incremental-backup","title":"Use rsync to do incremental backup","text":"cp -R /opt/seafile-data/seafile /backup/data/\n
"},{"location":"administration/backup_recovery/#recovery","title":"Recovery","text":""},{"location":"administration/backup_recovery/#restore-the-databases_1","title":"Restore the databases","text":"rsync -az /opt/seafile-data/seafile /backup/data/\n
"},{"location":"administration/backup_recovery/#restore-the-seafile-data","title":"Restore the seafile data","text":"docker cp /backup/databases/ccnet_db.sql seafile-mysql:/tmp/ccnet_db.sql\ndocker cp /backup/databases/seafile_db.sql seafile-mysql:/tmp/seafile_db.sql\ndocker cp /backup/databases/seahub_db.sql seafile-mysql:/tmp/seahub_db.sql\n\ndocker exec -it seafile-mysql /bin/sh -c \"mysql -u[username] -p[password] ccnet_db < /tmp/ccnet_db.sql\"\ndocker exec -it seafile-mysql /bin/sh -c \"mysql -u[username] -p[password] seafile_db < /tmp/seafile_db.sql\"\ndocker exec -it seafile-mysql /bin/sh -c \"mysql -u[username] -p[password] seahub_db < /tmp/seahub_db.sql\"\n
"},{"location":"administration/clean_database/","title":"Clean Database","text":""},{"location":"administration/clean_database/#session","title":"Session","text":"cp -R /backup/data/* /opt/seafile-data/seafile/\n
cd seafile-server-latest\n./seahub.sh python-env python3 seahub/manage.py clearsessions\n
/opt/seafile/seafile-server-latest
./seahub.sh python-env python3 seahub/manage.py clean_db_records\n
"},{"location":"administration/clean_database/#login","title":"Login","text":"use seahub_db;\nDELETE FROM Activity WHERE to_days(now()) - to_days(timestamp) > 90;\nDELETE FROM UserActivity WHERE to_days(now()) - to_days(timestamp) > 90;\n
"},{"location":"administration/clean_database/#file-access","title":"File Access","text":"use seahub_db;\nDELETE FROM sysadmin_extra_userloginlog WHERE to_days(now()) - to_days(login_date) > 90;\n
"},{"location":"administration/clean_database/#file-update","title":"File Update","text":"use seahub_db;\nDELETE FROM FileAudit WHERE to_days(now()) - to_days(timestamp) > 90;\n
"},{"location":"administration/clean_database/#permisson","title":"Permisson","text":"use seahub_db;\nDELETE FROM FileUpdate WHERE to_days(now()) - to_days(timestamp) > 90;\n
"},{"location":"administration/clean_database/#file-history","title":"File History","text":"use seahub_db;\nDELETE FROM PermAudit WHERE to_days(now()) - to_days(timestamp) > 90;\n
"},{"location":"administration/clean_database/#clean-outdated-library-data","title":"Clean outdated library data","text":"use seahub_db;\nDELETE FROM FileHistory WHERE to_days(now()) - to_days(timestamp) > 90;\n
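The per-table statements above can be collected into one SQL script and fed to the mysql client in a single run. A sketch; the statements are copied from this page, and the mysql invocation is left commented out because credentials are site-specific:

```shell
# Gather the 90-day cleanup statements from this page into one script.
cat > cleanup_seahub.sql <<'EOF'
USE seahub_db;
DELETE FROM Activity WHERE to_days(now()) - to_days(timestamp) > 90;
DELETE FROM UserActivity WHERE to_days(now()) - to_days(timestamp) > 90;
DELETE FROM sysadmin_extra_userloginlog WHERE to_days(now()) - to_days(login_date) > 90;
DELETE FROM FileAudit WHERE to_days(now()) - to_days(timestamp) > 90;
DELETE FROM FileUpdate WHERE to_days(now()) - to_days(timestamp) > 90;
DELETE FROM PermAudit WHERE to_days(now()) - to_days(timestamp) > 90;
DELETE FROM FileHistory WHERE to_days(now()) - to_days(timestamp) > 90;
EOF
# mysql -u[username] -p[password] < cleanup_seahub.sql
wc -l cleanup_seahub.sql
```

This makes it easy to schedule the cleanup from cron with a single command.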
./seahub.sh python-env python3 seahub/manage.py clear_invalid_repo_data\n
"},{"location":"administration/clean_database/#clean-library-sync-tokens","title":"Clean library sync tokens","text":"./seahub.sh python-env python3 seahub/manage.py clear_invalid_repo_data --dry-run=true\n
delete t,i from RepoUserToken t, RepoTokenPeerInfo i where t.token=i.token and sync_time < xxxx;\n
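The xxxx placeholder must be a UNIX timestamp; it stays a placeholder in the query above. One way to compute a cutoff on the shell (GNU date assumed), here for tokens unused in the last 90 days:

```shell
# UNIX timestamp for "90 days ago" (GNU date); substitute it for xxxx in the query.
CUTOFF=$(date -d '90 days ago' +%s)
echo "$CUTOFF"
```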
"},{"location":"administration/export_report/","title":"Export Report","text":"select * from RepoUserToken t, RepoTokenPeerInfo i where t.token=i.token and sync_time < xxxx;\n
/opt/seafile/seafile-server-latest
"},{"location":"administration/export_report/#export-user-storage-report","title":"Export User Storage Report","text":"cd seafile-server-latest\n./seahub.sh python-env python seahub/manage.py export_user_traffic_report --date 201906\n
"},{"location":"administration/export_report/#export-file-access-log","title":"Export File Access Log","text":"cd seafile-server-latest\n./seahub.sh python-env python seahub/manage.py export_user_storage_report\n
"},{"location":"administration/logs/","title":"Logs","text":""},{"location":"administration/logs/#log-files-of-seafile-server","title":"Log files of seafile server","text":"cd seafile-server-latest\n./seahub.sh python-env python seahub/manage.py export_file_access_log --start-date 2019-06-01 --end-date 2019-07-01\n
"},{"location":"administration/logs/#log-files-for-seafile-background-node-in-cluster-mode","title":"Log files for seafile background node in cluster mode","text":"
"},{"location":"administration/seafile_fsck/","title":"Seafile FSCK","text":"cd seafile-server-latest\n./seaf-fsck.sh [--repair|-r] [--export|-E export_path] [repo_id_1 [repo_id_2 ...]]\n
/opt/seafile/seafile-server-latest
"},{"location":"administration/seafile_fsck/#checking-integrity-of-libraries","title":"Checking Integrity of Libraries","text":"./seaf-fsck.sh\n
./seaf-fsck.sh [library-id1] [library-id2] ...\n
[02/13/15 16:21:07] fsck.c(470): Running fsck for repo ca1a860d-e1c1-4a52-8123-0bf9def8697f.\n[02/13/15 16:21:07] fsck.c(413): Checking file system integrity of repo fsck(ca1a860d)...\n[02/13/15 16:21:07] fsck.c(35): Dir 9c09d937397b51e1283d68ee7590cd9ce01fe4c9 is missing.\n[02/13/15 16:21:07] fsck.c(200): Dir /bf/pk/(9c09d937) is corrupted.\n[02/13/15 16:21:07] fsck.c(105): Block 36e3dd8757edeb97758b3b4d8530a4a8a045d3cb is corrupted.\n[02/13/15 16:21:07] fsck.c(178): File /bf/02.1.md(ef37e350) is corrupted.\n[02/13/15 16:21:07] fsck.c(85): Block 650fb22495b0b199cff0f1e1ebf036e548fcb95a is missing.\n[02/13/15 16:21:07] fsck.c(178): File /01.2.md(4a73621f) is corrupted.\n[02/13/15 16:21:07] fsck.c(514): Fsck finished for repo ca1a860d.\n
[02/13/15 16:36:11] Commit 6259251e2b0dd9a8e99925ae6199cbf4c134ec10 is missing\n[02/13/15 16:36:11] fsck.c(476): Repo ca1a860d HEAD commit is corrupted, need to restore to an old version.\n[02/13/15 16:36:11] fsck.c(314): Scanning available commits...\n[02/13/15 16:36:11] fsck.c(376): Find available commit 1b26b13c(created at 2015-02-13 16:10:21) for repo ca1a860d.\n
./seaf-fsck.sh --repair\n
./seaf-fsck.sh --repair [library-id1] [library-id2] ...\n
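As noted above, it helps to save the fsck output to a log file and search it for the keyword \"Fail\". A sketch of that workflow: in practice you would first run `./seaf-fsck.sh > fsck.log 2>&1`; the log content written here is fabricated for the demo so the grep step has something to match.

```shell
# Fabricated excerpt standing in for real seaf-fsck output (demo only).
cat > fsck.log <<'EOF'
[16:21:07] fsck.c(470): Running fsck for repo ca1a860d-e1c1-4a52-8123-0bf9def8697f.
[16:21:07] fsck.c(178): File /bf/02.1.md(ef37e350) is corrupted.
[16:21:08] Failed to check integrity of repo ca1a860d.
EOF
# Lines containing "Fail" point at the corrupted libraries.
grep -n "Fail" fsck.log
```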
"},{"location":"administration/seafile_fsck/#speeding-up-fsck-by-not-checking-file-contents","title":"Speeding up FSCK by not checking file contents","text":"./seaf-fsck.sh --export top_export_path [library-id1] [library-id2] ...\n
top_export_path
is a directory to place the exported files. Each library will be exported as a sub-directory of the export path. If you don't specify library IDs, all libraries will be exported.
"},{"location":"administration/seafile_gc/#run-gc","title":"Run GC","text":""},{"location":"administration/seafile_gc/#dry-run-mode","title":"Dry-run Mode","text":"seaf-gc.sh --dry-run [repo-id1] [repo-id2] ...\n
[03/19/15 19:41:49] seafserv-gc.c(115): GC version 1 repo My Library(ffa57d93)\n[03/19/15 19:41:49] gc-core.c(394): GC started. Total block number is 265.\n[03/19/15 19:41:49] gc-core.c(75): GC index size is 1024 Byte.\n[03/19/15 19:41:49] gc-core.c(408): Populating index.\n[03/19/15 19:41:49] gc-core.c(262): Populating index for repo ffa57d93.\n[03/19/15 19:41:49] gc-core.c(308): Traversed 5 commits, 265 blocks.\n[03/19/15 19:41:49] gc-core.c(440): Scanning unused blocks.\n[03/19/15 19:41:49] gc-core.c(472): GC finished. 265 blocks total, about 265 reachable blocks, 0 blocks can be removed.\n\n[03/19/15 19:41:49] seafserv-gc.c(115): GC version 1 repo aa(f3d0a8d0)\n[03/19/15 19:41:49] gc-core.c(394): GC started. Total block number is 5.\n[03/19/15 19:41:49] gc-core.c(75): GC index size is 1024 Byte.\n[03/19/15 19:41:49] gc-core.c(408): Populating index.\n[03/19/15 19:41:49] gc-core.c(262): Populating index for repo f3d0a8d0.\n[03/19/15 19:41:49] gc-core.c(308): Traversed 8 commits, 5 blocks.\n[03/19/15 19:41:49] gc-core.c(264): Populating index for sub-repo 9217622a.\n[03/19/15 19:41:49] gc-core.c(308): Traversed 4 commits, 4 blocks.\n[03/19/15 19:41:49] gc-core.c(440): Scanning unused blocks.\n[03/19/15 19:41:49] gc-core.c(472): GC finished. 5 blocks total, about 9 reachable blocks, 0 blocks can be removed.\n\n[03/19/15 19:41:49] seafserv-gc.c(115): GC version 1 repo test2(e7d26d93)\n[03/19/15 19:41:49] gc-core.c(394): GC started. Total block number is 507.\n[03/19/15 19:41:49] gc-core.c(75): GC index size is 1024 Byte.\n[03/19/15 19:41:49] gc-core.c(408): Populating index.\n[03/19/15 19:41:49] gc-core.c(262): Populating index for repo e7d26d93.\n[03/19/15 19:41:49] gc-core.c(308): Traversed 577 commits, 507 blocks.\n[03/19/15 19:41:49] gc-core.c(440): Scanning unused blocks.\n[03/19/15 19:41:49] gc-core.c(472): GC finished. 
507 blocks total, about 507 reachable blocks, 0 blocks can be removed.\n\n[03/19/15 19:41:50] seafserv-gc.c(124): === Repos deleted by users ===\n[03/19/15 19:41:50] seafserv-gc.c(145): === GC is finished ===\n\n[03/19/15 19:41:50] Following repos have blocks to be removed:\nrepo-id1\nrepo-id2\nrepo-id3\n
seaf-gc.sh [repo-id1] [repo-id2] ...\n
seaf-gc.sh -r\n
seaf-gc.sh --rm-fs\n
seaf-gc.sh -t 20\n
"},{"location":"administration/security_features/","title":"Security Questions","text":""},{"location":"administration/security_features/#how-is-the-connection-between-client-and-server-encrypted","title":"How is the connection between client and server encrypted?","text":"seaf-gc.sh --id-prefix a123\n
PBKDF2SHA256$iterations$salt$hash\n
"},{"location":"administration/two_factor_authentication/","title":"Two-Factor Authentication","text":"PBKDF2(password, salt, iterations)
. The number of iterations is currently 10000.
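The derivation can be reproduced with Python's standard library. A hedged sketch: the password and salt below are made-up demo values, and the exact encoding of the stored record may differ from what Seahub writes.

```shell
# Derive a PBKDF2-SHA256 hash in the documented PBKDF2SHA256$iterations$salt$hash shape.
python3 - <<'EOF'
import base64, hashlib
salt = "demosalt"                      # demo value, not a real record's salt
dk = hashlib.pbkdf2_hmac("sha256", b"demo-password", salt.encode(), 10000)
print("PBKDF2SHA256$10000$%s$%s" % (salt, base64.b64encode(dk).decode()))
EOF
```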
seahub_settings.py
and restart service. ENABLE_TWO_FACTOR_AUTH = True\nTWO_FACTOR_DEVICE_REMEMBER_DAYS = 30 # optional, default 90 days.\n
After that, there will be a \"Two-Factor Authentication\" section in the user profile page.
Users can use the Google Authenticator app on their smart-phone to scan the QR code.
"},{"location":"changelog/changelog-for-seafile-professional-server-old/","title":"Seafile Professional Server Changelog (old)","text":""},{"location":"changelog/changelog-for-seafile-professional-server-old/#44","title":"4.4","text":"Note: Two new options are added in version 4.4, both are in seahub_settings.py
This version contains no database table change.
"},{"location":"changelog/changelog-for-seafile-professional-server-old/#449-20160229","title":"4.4.9 (2016.02.29)","text":"LDAP improvements and fixes
New features:
Pro only:
Fixes:
Note: this version contains no database table change from v4.2. But the old search index will be deleted and regenerated.
Note when upgrading from v4.2 and using cluster, a new option COMPRESS_CACHE_BACKEND = 'locmem://'
should be added to seahub_settings.py
About \"Open via Client\": The web interface will call the Seafile desktop client via the \"seafile://\" protocol to open a file with a local program. If the file is already synced, the local file is opened; otherwise it is downloaded and uploaded after modification. Requires client version 4.3.0+
"},{"location":"changelog/changelog-for-seafile-professional-server-old/#430-20150725","title":"4.3.0 (2015.07.25)","text":"Usability improvements
Pro only features:
Others
THUMBNAIL_DEFAULT_SIZE = 24
, instead of THUMBNAIL_DEFAULT_SIZE = '24'
Note: because Seafile changed the way office preview works in version 4.2.2, you need to clean up the old generated files using the command:
rm -rf /tmp/seafile-office-output/html/\n
"},{"location":"changelog/changelog-for-seafile-professional-server-old/#424-20150708","title":"4.2.4 (2015.07.08)","text":"Previously, the whole file was converted to HTML5 before being returned to the client. By converting an office file to HTML5 page by page, the first page is displayed faster; displaying each page in a separate frame also improves the quality for some files.
"},{"location":"changelog/changelog-for-seafile-professional-server-old/#421-20150630","title":"4.2.1 (2015.06.30)","text":"Improved account management
Important
New features
Others
Pro only updates
Usability
Security Improvement
Platform
Pro only updates
Updates in community edition too
Important
Small
Pro edition only:
Syncing
Platform
Web
Web
Platform
Web
Platform
Misc
WebDAV
pro.py search --clear
commandPlatform
Web
Web for Admin
Platform
Web
Web for Admin
API
Web
API
Platform
You can check Seafile release table to find the lifetime of each release and current supported OS: https://cloud.seatable.io/dtable/external-links/a85d4221e41344c19566/?tid=0000&vid=0000
"},{"location":"changelog/changelog-for-seafile-professional-server/#120","title":"12.0","text":"Upgrade
Please check our document for how to upgrade to 12.0
"},{"location":"changelog/changelog-for-seafile-professional-server/#1204-beta-to-be-told","title":"12.0.4 beta (to-be-told)","text":".env
file.ccnet.conf
is removed. Some of its configuration items are moved to the .env
file, others are read from items in seafile.conf
with the same name.Upgrade
Please check our document for how to upgrade to 11.0
"},{"location":"changelog/changelog-for-seafile-professional-server/#11016-2024-11-04","title":"11.0.16 (2024-11-04)","text":"Seafile
SDoc editor 0.8
Seafile
SDoc editor 0.7
SDoc editor 0.6
Major changes
UI Improvements
Pro edition only changes
Other changes
Upgrade
Please check our document for how to upgrade to 10.0.
Note
If you upgrade to version 10.0.18+ from 10.0.16 or below, you need to upgrade sqlalchemy to version 1.4.44+ if you use a binary-based installation. Otherwise the \"activities\" page will not work.
"},{"location":"changelog/changelog-for-seafile-professional-server/#10018-2024-11-01","title":"10.0.18 (2024-11-01)","text":"This release is for Docker image only
Note: after upgrading to this version, you need to upgrade the Python libraries on your server: \"pillow==10.2.* captcha==0.5.* django_simple_captcha==0.5.20\"
"},{"location":"changelog/changelog-for-seafile-professional-server/#10012-2024-01-16","title":"10.0.12 (2024-01-16)","text":"Upgrade
Please check our document for how to upgrade to 9.0.
"},{"location":"changelog/changelog-for-seafile-professional-server/#9016-2023-03-22","title":"9.0.16 (2023-03-22)","text":"Note: the included lxml library is removed for compatibility reasons. The library is used by the published libraries feature and the WebDAV feature. You need to install lxml manually after upgrading to 9.0.7. Use the command pip3 install lxml
to install it.
The new file-server written in golang serves HTTP requests to upload/download/sync files. It provides three advantages:
You can turn golang file-server on by adding following configuration in seafile.conf
[fileserver]\nuse_go_fileserver = true\n
"},{"location":"changelog/changelog-for-seafile-professional-server/#901","title":"9.0.1","text":"Deprecated
"},{"location":"changelog/changelog-for-seafile-professional-server/#900","title":"9.0.0","text":"Deprecated
"},{"location":"changelog/changelog-for-seafile-professional-server/#80","title":"8.0","text":"Upgrade
Please check our document for how to upgrade to 8.0.
"},{"location":"changelog/changelog-for-seafile-professional-server/#8017-20220110","title":"8.0.17 (2022/01/10)","text":"Potential breaking change in Seafile Pro 8.0.3: You can set the maximum number of files contained in a library that can be synced by the Seafile client. The default is 100000. When you download a repo, the Seafile client will request the fs id list, and you can control the timeout of this request through the fs_id_list_request_timeout
configuration option, which defaults to 5 minutes. These two options were added to prevent long fs-id-list requests from overloading the server. If you have large libraries on the server, this can cause an \"internal server error\" to be returned to the client. You have to set large enough limits for these two options.
[fileserver]\nmax_sync_file_count = 100000\nfs_id_list_request_timeout = 300\n
"},{"location":"changelog/changelog-for-seafile-professional-server/#802-20210421","title":"8.0.2 (2021/04/21)","text":"Upgrade
Please check our document for how to upgrade to 7.1.
"},{"location":"changelog/changelog-for-seafile-professional-server/#7122-20210729","title":"7.1.22 (2021/07/29)","text":"Potential breaking change in Seafile Pro 7.1.16: You can set the maximum number of files contained in a library that can be synced by the Seafile client. The default is 100000. When you download a repo, the Seafile client will request the fs id list, and you can control the timeout of this request through the fs_id_list_request_timeout
configuration option, which defaults to 5 minutes. These two options were added to prevent long fs-id-list requests from overloading the server. If you have large libraries on the server, this can cause an \"internal server error\" to be returned to the client. You have to set large enough limits for these two options.
[fileserver]\nmax_sync_file_count = 100000\nfs_id_list_request_timeout = 300\n
"},{"location":"changelog/changelog-for-seafile-professional-server/#7115-20210318","title":"7.1.15 (2021/03/18)","text":"Since seafile-pro 7.0.0, we have upgraded Elasticsearch to 5.6. As Elasticsearch 5.6 relies on the Java 8 environment and can't run with root, you need to run Seafile with a non-root user and upgrade the Java version.
"},{"location":"changelog/changelog-for-seafile-professional-server/#7019-20200907","title":"7.0.19 (2020/09/07)","text":"-Xms1g -Xmx1g
In version 6.3, Django is upgraded to version 1.11. Django 1.8, which was used in version 6.2, was deprecated in April 2018.
With this upgrade, the fast-cgi mode is no longer supported. You need to configure Seafile behind Nginx/Apache in WSGI mode.
The way to run Seahub in another port is also changed. You need to modify the configuration file conf/gunicorn.conf
instead of running ./seahub.sh start <another-port>
.
Version 6.3 also changed the database table for file comments. If you have used this feature, you need to migrate old file comments using the following command after upgrading to 6.3:
./seahub.sh python-env seahub/manage.py migrate_file_comment\n
Note: this command should be run while the Seafile server is running.
Version 6.3 changed '/shib-login' to '/sso'. If you use Shibboleth, you need to update your Apache/Nginx config. Please check the updated document: shibboleth config v6.3
Version 6.3 adds a new option for file search (seafevents.conf
):
[INDEX FILES]\n...\nhighlight = fvh\n...\n
This option improves search speed significantly (10x) when the search result contains large pdf/doc files. But you need to rebuild the search index if you want to add this option.
"},{"location":"changelog/changelog-for-seafile-professional-server/#6314-20190521","title":"6.3.14 (2019/05/21)","text":"New features
From 6.2, it is recommended to use proxy mode for communication between Seahub and Nginx/Apache. Two steps are needed if you'd like to switch to WSGI mode:
./seahub.sh start
instead of ./seahub.sh start-fastcgi
The configuration of Nginx is as follows:
location / {\n proxy_pass http://127.0.0.1:8000;\n proxy_set_header Host $host;\n proxy_set_header X-Real-IP $remote_addr;\n proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;\n proxy_set_header X-Forwarded-Host $server_name;\n proxy_read_timeout 1200s;\n\n # used for view/edit office file via Office Online Server\n client_max_body_size 0;\n\n access_log /var/log/nginx/seahub.access.log;\n error_log /var/log/nginx/seahub.error.log;\n }\n
The configuration of Apache is as follows:
# seahub\n SetEnvIf Authorization \"(.*)\" HTTP_AUTHORIZATION=$1\n ProxyPass / http://127.0.0.1:8000/\n ProxyPassReverse / http://127.0.0.1:8000/\n
"},{"location":"changelog/changelog-for-seafile-professional-server/#6213-2018518","title":"6.2.13 (2018.5.18)","text":"file already exists
error for the first time.per_page
parameter to 10 when search file via api.repo_owner
field to library search web api.ENABLE_REPO_SNAPSHOT_LABEL = True
to turn the feature on)You can follow the document on minor upgrade.
"},{"location":"changelog/changelog-for-seafile-professional-server/#619-20170928","title":"6.1.9 \uff082017.09.28\uff09","text":"Web UI Improvement:
Improvement for admins:
System changes:
ENABLE_WIKI = True
in seahub_settings.py)You can follow the document on minor upgrade.
Special note for upgrading a cluster:
In version 6.0, the folder download mechanism has been updated. This requires that, in a cluster deployment, the seafile-data/httptemp folder be on an NFS share. You can make this folder a symlink to the NFS share.
cd /data/haiwen/\nln -s /nfs-share/seafile-httptemp seafile-data/httptemp\n
The httptemp folder only contains temp files for downloading/uploading files on the web UI, so there is no reliability requirement for the NFS share. You can export it from any node in the cluster.
"},{"location":"changelog/changelog-for-seafile-professional-server/#6013-20170508","title":"6.0.13 (2017.05.08)","text":"Improvement for admin
Other
# -*- coding: utf-8 -*-
to seahub_settings.py, so that admin can use non-ascii characters in the file.[Audit]
and [AUDIT]
in seafevent.confPro only features
._
cloud file browser
others
This version has a few bugs. We will fix them soon.
"},{"location":"changelog/client-changelog/#601-20161207","title":"6.0.1 (2016/12/07)","text":"Note: the Seafile client now supports HiDPI under Windows. You should remove the QT_DEVICE_PIXEL_RATIO setting if you had set one previously.
In the old version, you would sometimes see strange directories such as \"Documents~1\" synced to the server. This is because the old version did not handle long paths correctly.
"},{"location":"changelog/client-changelog/#406-20150109","title":"4.0.6 (2015/01/09)","text":"In the previous version, when you open an office file on Windows, it is locked by the operating system. If another person modifies this file on another computer, syncing will be stopped until you close the locked file. In this new version, the syncing process will continue. The locked file will not be synced to the local computer, but other files will not be affected.
"},{"location":"changelog/client-changelog/#403-20141203","title":"4.0.3 (2014/12/03)","text":"You have to update the clients on all PCs. If one PC does not use v3.1.11, then when the \"deleting folder\" information is synced to that PC, it will fail to delete the folder completely, and the folder will be synced back to the other PCs. So the other PCs will see the folder reappear.
"},{"location":"changelog/client-changelog/#3110-20141113","title":"3.1.10 (2014/11/13)","text":"Note: This version contains a bug that prevents you from logging in to your private servers.
1.8.1
1.8.0
1.7.3
1.7.2
1.7.1
1.7.0
1.6.2
1.6.1
1.6.0
1.5.3
1.5.2
1.5.1
1.5.0
S:
because a few programs will automatically try to create files in S:
Note when upgrading to 5.0 from 4.4
You can follow the document on major upgrade (http://manual.seafile.com/deploy/upgrade.html) (URL might be deprecated)
In Seafile 5.0, we have moved all config files to folder conf
, including:
If you want to downgrade from v5.0 to v4.4, you should manually copy these files back to the original place, then run minor_upgrade.sh to upgrade symbolic links back to version 4.4.
The 5.0 server is compatible with v4.4 and v4.3 desktop clients.
Common issues (solved) when upgrading to v5.0:
Improve seaf-fsck
Sharing link
[[ Pagename]]
.UI changes:
Config changes:
conf
Trash:
Admin:
Security:
New features:
Fixes:
Usability Improvement
Others
THUMBNAIL_DEFAULT_SIZE = 24
, instead of THUMBNAIL_DEFAULT_SIZE = '24'
Note when upgrading to 4.2 from 4.1:
If you deploy Seafile in a non-root domain, you need to add the following extra settings in seahub_settings.py:
COMPRESS_URL = MEDIA_URL\nSTATIC_URL = MEDIA_URL + '/assets/'\n
"},{"location":"changelog/server-changelog-old/#423-20150618","title":"4.2.3 (2015.06.18)","text":"Usability
Security Improvement
Platform
Important
Small
Important
Small improvements
Syncing
Platform
Web
Web
Platform
Web
Platform
Platform
Web
WebDAV
<a>, <table>, <img>
and a few other HTML elements in markdown to avoid XSS attacks. Platform
Web
Web for Admin
Platform
Web
Web for Admin
API
Web
API
Platform
Web
Daemon
Web
Daemon
Web
For Admin
API
Seafile Web
Seafile Daemon
API
You can check Seafile release table to find the lifetime of each release and current supported OS: https://cloud.seatable.io/dtable/external-links/a85d4221e41344c19566/?tid=0000&vid=0000
"},{"location":"changelog/server-changelog/#120","title":"12.0","text":"Upgrade
Please check our document for how to upgrade to 12.0
"},{"location":"changelog/server-changelog/#1204-beta-2024-11-21","title":"12.0.4 beta (2024-11-21)","text":".env
file.ccnet.conf
is removed. Some of its configuration items are moved to the .env
file, and others are read from items in seafile.conf
with the same name.Upgrade
Please check our document for how to upgrade to 11.0
"},{"location":"changelog/server-changelog/#11012-2024-08-14","title":"11.0.12 (2024-08-14)","text":"Seafile
Seafile
SDoc editor 0.8
Seafile
SDoc editor 0.7
Seafile
SDoc editor 0.6
Seafile
Seafile
SDoc editor 0.5
Seafile
SDoc editor 0.4
Seafile
SDoc editor 0.3
Seafile
SDoc editor 0.2
Upgrade
Please check our document for how to upgrade to 10.0.
"},{"location":"changelog/server-changelog/#1001-2023-04-11","title":"10.0.1 (2023-04-11)","text":"/accounts/login
redirect by ?next=
parameter
Note: the included lxml library is removed for compatibility reasons. The library is used by the published libraries feature and the WebDAV feature. You need to install lxml manually after upgrading to 9.0.7. Use the command pip3 install lxml
to install it.
The new file-server written in golang serves HTTP requests to upload/download/sync files. It provides three advantages:
You can turn golang file-server on by adding following configuration in seafile.conf
[fileserver]\nuse_go_fileserver = true\n
"},{"location":"changelog/server-changelog/#80","title":"8.0","text":"Please check our document for how to upgrade to 8.0.
"},{"location":"changelog/server-changelog/#808-20211206","title":"8.0.8 (2021/12/06)","text":"Feature changes
PostgreSQL support is dropped as we have rewritten the database access code to remove a copyright issue.
Upgrade
Please check our document for how to upgrade to 7.1.
"},{"location":"changelog/server-changelog/#715-20200922","title":"7.1.5 (2020/09/22)","text":"Feature changes
In version 6.3, users can create public or private Wikis. In version 7.0, private Wikis are replaced by the column mode view. Every library has a column mode view, so users don't need to explicitly create private Wikis.
Public Wikis are now renamed to published libraries.
Upgrade
Just follow our document on major version upgrade. No special steps are needed.
"},{"location":"changelog/server-changelog/#705-20190923","title":"7.0.5 (2019/09/23)","text":"In version 6.3, Django is upgraded to version 1.11. Django 1.8, which is used in version 6.2, is deprecated in 2018 April.
With this upgrade, the fast-cgi mode is no longer supported. You need to config Seafile behind Nginx/Apache in WSGI mode.
The way to run Seahub in another port is also changed. You need to modify the configuration file conf/gunicorn.conf
instead of running ./seahub.sh start <another-port>
.
Version 6.3 also changed the database table for file comments. If you have used this feature, you need to migrate old file comments using the following command after upgrading to 6.3:
./seahub.sh python-env seahub/manage.py migrate_file_comment\n
Note: this command should be run while the Seafile server is running.
"},{"location":"changelog/server-changelog/#634-20180915","title":"6.3.4 (2018/09/15)","text":"From 6.2, it is recommended to use WSGI mode for communication between Seahub and Nginx/Apache. Two steps are needed if you'd like to switch to WSGI mode:
./seahub.sh start
instead of ./seahub.sh start-fastcgi
The configuration of Nginx is as follows:
location / {\n proxy_pass http://127.0.0.1:8000;\n proxy_set_header Host $host;\n proxy_set_header X-Real-IP $remote_addr;\n proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;\n proxy_set_header X-Forwarded-Host $server_name;\n proxy_read_timeout 1200s;\n\n # used for view/edit office file via Office Online Server\n client_max_body_size 0;\n\n access_log /var/log/nginx/seahub.access.log;\n error_log /var/log/nginx/seahub.error.log;\n }\n
The configuration of Apache is as follows:
# seahub\n SetEnvIf Authorization \"(.*)\" HTTP_AUTHORIZATION=$1\n ProxyPass / http://127.0.0.1:8000/\n ProxyPassReverse / http://127.0.0.1:8000/\n
"},{"location":"changelog/server-changelog/#625-20180123","title":"6.2.5 (2018/01/23)","text":"ENABLE_REPO_SNAPSHOT_LABEL = True
to turn the feature on)
If you upgrade from 6.0 and you'd like to use the video thumbnail feature, you need to install the ffmpeg package:
# for ubuntu 16.04\napt-get install ffmpeg\npip install pillow moviepy\n\n# for Centos 7\nyum -y install epel-release\nrpm --import http://li.nux.ro/download/nux/RPM-GPG-KEY-nux.ro\nyum -y install ffmpeg ffmpeg-devel\npip install pillow moviepy\n
"},{"location":"changelog/server-changelog/#612-20170815","title":"6.1.2 (2017.08.15)","text":"Web UI Improvement:
Improvement for admins:
System changes:
Note: If you ever used 6.0.0, 6.0.1 or 6.0.2 with SQLite as the database and encountered a problem with desktop/mobile client login, follow https://github.com/haiwen/seafile/pull/1738 to fix the problem.
"},{"location":"changelog/server-changelog/#609-20170330","title":"6.0.9 (2017.03.30)","text":"Improvement for admin
# -*- coding: utf-8 -*-
to seahub_settings.py, so that admins can use non-ASCII characters in the file.
Other
Warning:
Note: when upgrading from 5.1.3 or a lower version to 5.1.4+, you need to install python-urllib3 (or python2-urllib3 for Arch Linux) manually:
# for Ubuntu\nsudo apt-get install python-urllib3\n# for CentOS\nsudo yum install python-urllib3\n
"},{"location":"changelog/server-changelog/#514-20160723","title":"5.1.4 (2016.07.23)","text":"Note: downloading multiple files at once will be added in the next release.
Note: in this version, group discussion is not re-implemented yet. It will be available when the stable version is released.
The config files used in Seafile include:
You can also modify most of the config items via the web interface. The config items are saved in a database table (seahub-db/constance_config) and have higher priority than the items in the config files.
"},{"location":"config/#the-design-of-configure-options","title":"The design of configure options","text":"There are now three places where you can configure Seafile server:
The web interface has the highest priority. It contains a subset of end-user oriented settings. In practice, you can disable settings via the web interface for simplicity.
Environment variables contain system-level settings that are needed when initializing or running Seafile server. Environment variables fall into three categories:
The variables in the first category can be deleted after initialization. In the future, we will make more components read their config from environment variables, so that the third category is no longer needed.
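To make the priority order concrete, here is a minimal sketch of the lookup chain described above (web interface settings stored in the database win over config files, which win over defaults). This is illustrative only, not Seafile's actual implementation; the helper and the setting name used in the example are hypothetical:

```python
# Illustrative sketch of the priority order described above:
# web interface (database-backed) > config files > built-in defaults.
# Not Seafile's actual implementation; names are hypothetical.

def get_setting(name, db_settings, file_settings, defaults):
    """Return the effective value of a setting, honoring the priority chain."""
    if name in db_settings:        # saved via the web UI (constance_config table)
        return db_settings[name]
    if name in file_settings:      # from a config file such as seahub_settings.py
        return file_settings[name]
    return defaults[name]

# Example: the web UI value wins over the config file value.
effective = get_setting(
    "ENABLE_SIGNUP",
    db_settings={"ENABLE_SIGNUP": False},
    file_settings={"ENABLE_SIGNUP": True},
    defaults={"ENABLE_SIGNUP": True},
)
```

Under this sketch, `effective` is `False`: the database-backed web setting shadows the config-file value.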
"},{"location":"config/admin_roles_permissions/","title":"Roles and Permissions Support","text":"You can add/edit roles and permissions for administrators. Seafile has four built-in admin roles:
default_admin, has all permissions.
system_admin, can only view system info and configure the system.
daily_admin, can only view system info, view statistic, manage library/user/group, view user log.
audit_admin, can only view system info and admin log.
All administrators will have default_admin
role with all permissions by default. If you set an administrator to some other admin role, the administrator will only have the permissions you configured to True
.
Seafile currently supports eight admin permissions. Their configuration is very similar to common user roles; you can customize them by adding the following settings to seahub_settings.py
.
ENABLED_ADMIN_ROLE_PERMISSIONS = {\n 'system_admin': {\n 'can_view_system_info': True,\n 'can_config_system': True,\n },\n 'daily_admin': {\n 'can_view_system_info': True,\n 'can_view_statistic': True,\n 'can_manage_library': True,\n 'can_manage_user': True,\n 'can_manage_group': True,\n 'can_view_user_log': True,\n },\n 'audit_admin': {\n 'can_view_system_info': True,\n 'can_view_admin_log': True,\n },\n 'custom_admin': {\n 'can_view_system_info': True,\n 'can_config_system': True,\n 'can_view_statistic': True,\n 'can_manage_library': True,\n 'can_manage_user': True,\n 'can_manage_group': True,\n 'can_view_user_log': True,\n 'can_view_admin_log': True,\n },\n}\n
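As an illustration of how such a role map is consulted, the sketch below checks a permission against a dictionary shaped like ENABLED_ADMIN_ROLE_PERMISSIONS above. The helper function is hypothetical, not Seafile's actual code:

```python
# Minimal sketch of a permission check against a role map shaped like
# ENABLED_ADMIN_ROLE_PERMISSIONS above. Hypothetical helper, not Seafile code.

ENABLED_ADMIN_ROLE_PERMISSIONS = {
    'audit_admin': {
        'can_view_system_info': True,
        'can_view_admin_log': True,
    },
}

def admin_has_permission(role, permission, role_map):
    """A permission is granted only if it is explicitly set to True for the role."""
    return role_map.get(role, {}).get(permission, False)

# audit_admin may view the admin log, but may not manage users.
print(admin_has_permission('audit_admin', 'can_view_admin_log', ENABLED_ADMIN_ROLE_PERMISSIONS))
print(admin_has_permission('audit_admin', 'can_manage_user', ENABLED_ADMIN_ROLE_PERMISSIONS))
```

This mirrors the behavior described above: an administrator assigned a custom role gets only the permissions explicitly configured to True.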
"},{"location":"config/auth_switch/","title":"Switch authentication type","text":"Seafile Server supports the following external authentication types:
Since version 11.0, switching between the types is possible, but any switch requires modifications to Seafile's databases.
Note
Before manually manipulating your database, make a database backup, so you can restore your system if anything goes wrong!
See more about making a database backup.
"},{"location":"config/auth_switch/#migrating-from-local-user-database-to-external-authentication","title":"Migrating from local user database to external authentication","text":"As an organisation grows and its IT infrastructure matures, the migration from local authentication to external authentication like LDAP, SAML, OAUTH is common requirement. Fortunately, the switch is comparatively simple.
"},{"location":"config/auth_switch/#general-procedure","title":"General procedure","text":"Configure and test the desired external authentication. Note the name of the provider
you use in the config file. The user to be migrated should already be able to log in with this new authentication type, but he will be created as a new user with a new unique identifier, so he will not have access to his existing libraries. Note the uid
from the social_auth_usersocialauth
table. Delete this new, still empty user again.
Determine the ID of the user to be migrated in ccnet_db.EmailUser. For users created before version 11, the ID should be the user's email, for users created after version 11, the ID should be a string like xxx@auth.local
.
Replace the password hash with an exclamation mark.
Create a new entry in social_auth_usersocialauth
with the xxx@auth.local
, your provider
and the uid
.
The login with the password stored in the local database is not possible anymore. After logging in via external authentication, the user has access to all his previous libraries.
"},{"location":"config/auth_switch/#example","title":"Example","text":"This example shows how to migrate the user with the username 12ae56789f1e4c8d8e1c31415867317c@auth.local
from local database authentication to OAuth. The OAuth authentication is configured in seahub_settings.py
with the provider name authentik-oauth
. The uid
of the user inside the Identity Provider is HR12345
.
This is what the database looks like before these commands must be executed:
mysql> select email,left(passwd,25) from EmailUser where email = '12ae56789f1e4c8d8e1c31415867317c@auth.local';\n+---------------------------------------------+------------------------------+\n| email | left(passwd,25) |\n+---------------------------------------------+------------------------------+\n| 12ae56789f1e4c8d8e1c31415867317c@auth.local | PBKDF2SHA256$10000$4cdda6... |\n+---------------------------------------------+------------------------------+\n\nmysql> update EmailUser set passwd = '!' where email = '12ae56789f1e4c8d8e1c31415867317c@auth.local';\n\nmysql> insert into `social_auth_usersocialauth` (`username`, `provider`, `uid`, `extra_data`) values ('12ae56789f1e4c8d8e1c31415867317c@auth.local', 'authentik-oauth', 'HR12345', '');\n
Note
The extra_data
field store user's information returned from the provider. For most providers, the extra_data
field is usually an empty character. Since version 11.0.3-Pro, the default value of the extra_data
field is NULL
.
Afterwards the databases should look like this:
mysql> select email,passwd from EmailUser where email = '12ae56789f1e4c8d8e1c31415867317c@auth.local';\n+---------------------------------------------+------- +\n| email | passwd |\n+---------------------------------------------+--------+\n| 12ae56789f1e4c8d8e1c31415867317c@auth.local | ! |\n+---------------------------------------------+--------+\n\nmysql> select username,provider,uid from social_auth_usersocialauth where username = '12ae56789f1e4c8d8e1c31415867317c@auth.local';\n+---------------------------------------------+-----------------+---------+\n| username | provider | uid |\n+---------------------------------------------+-----------------+---------+\n| 12ae56789f1e4c8d8e1c31415867317c@auth.local | authentik-oauth | HR12345 |\n+---------------------------------------------+-----------------+---------+\n
"},{"location":"config/auth_switch/#migrating-from-one-external-authentication-to-another","title":"Migrating from one external authentication to another","text":"First configure the two external authentications and test them with a dummy user. Then, to migrate all the existing users you only need to make changes to the social_auth_usersocialauth
table. No entries need to be deleted or created. You only need to modify the existing ones. The xxx@auth.local
remains the same, you only need to replace the provider
and the uid
.
First, delete the entry in the social_auth_usersocialauth
table that belongs to the particular user.
Then you can reset the user's password, e.g. via the web interface. The user will be assigned a local password and from now on the authentication against the local database of Seafile will be done.
More details about this option will follow soon.
"},{"location":"config/auto_login_seadrive/","title":"Auto Login to SeaDrive on Windows","text":"Kerberos is a widely used single sign on (SSO) protocol. Supporting of auto login will use a Kerberos service. For server configuration, please read remote user authentication documentation. You have to configure Apache to authenticate with Kerberos. This is out of the scope of this documentation. You can for example refer to this webpage.
"},{"location":"config/auto_login_seadrive/#technical-details","title":"Technical Details","text":"The client machine has to join the AD domain. In a Windows domain, the Kerberos Key Distribution Center (KDC) is implemented on the domain service. Since the client machine has been authenticated by KDC when a Windows user logs in, a Kerberos ticket will be generated for current user without needs of another login in the browser.
When a program using the WinHttp API tries to connect a server, it can perform a login automatically through the Integrated Windows Authentication. Internet Explorer and SeaDrive both use this mechanism.
The details of Integrated Windows Authentication is described below:
In short:
The Internet Options has to be configured as following:
Open \"Internet Options\", select \"Security\" tab, select \"Local Intranet\" zone.
Note
Above configuration requires a reboot to take effect.
Next, we shall test the auto login function on Internet Explorer: visit the website and click \"Single Sign-On\" link. It should be able to log in directly, otherwise the auto login is malfunctioned.
Note
The address in the test must be same as the address specified in the keytab file. Otherwise, the client machine can't get a valid ticket from Kerberos.
"},{"location":"config/auto_login_seadrive/#auto-login-on-seadrive","title":"Auto Login on SeaDrive","text":"SeaDrive will use the Kerberos login configuration from the Windows Registry under HKEY_CURRENT_USER/SOFTWARE/SeaDrive
.
Key : PreconfigureServerAddr\nType : REG_SZ\nValue : <the url of seafile server>\n\nKey : PreconfigureUseKerberosLogin\nType : REG_SZ\nValue : <0|1> // 0 for normal login, 1 for SSO login\n
The system wide configuration path is located at HKEY_LOCAL_MACHINE/SOFTWARE/Wow6432Node/SeaDrive
.
SeaDrive can be installed silently with the following command (requires admin privileges):
msiexec /i seadrive.msi /quiet /qn /log install.log\n
"},{"location":"config/auto_login_seadrive/#auto-login-via-group-policy","title":"Auto Login via Group Policy","text":"The configuration of Internet Options : https://docs.microsoft.com/en-us/troubleshoot/browsers/how-to-configure-group-policy-preference-settings
The configuration of Windows Registry : https://thesolving.com/server-room/how-to-deploy-a-registry-key-via-group-policy/
"},{"location":"config/ccnet-conf/","title":"ccnet.conf","text":"Ccnet is the internal RPC framework used by Seafile server and also manages the user database. A few useful options are in ccnet.conf.
ccnet.conf
is removed in version 12.0
Due to ccnet.conf
is removed in version 12.0, the following informaiton is read from .env
file
SEAFILE_MYSQL_DB_USER: The database user, the default is seafile\nSEAFILE_MYSQL_DB_PASSWORD: The database password\nSEAFILE_MYSQL_DB_HOST: The database host\nSEAFILE_MYSQL_DB_CCNET_DB_NAME: The database name for ccnet db, the default is ccnet_db\n
"},{"location":"config/ccnet-conf/#changing-mysql-connection-pool-size","title":"Changing MySQL Connection Pool Size","text":"In version 12.0, the following information is read from the same option in seafile.conf
When you configure ccnet to use MySQL, the default connection pool size is 100, which should be enough for most use cases. You can change this value by adding following options to ccnet.conf:
[Database]\n......\n# Use larger connection pool\nMAX_CONNECTIONS = 200\n
"},{"location":"config/ccnet-conf/#using-encrypted-connections","title":"Using Encrypted Connections","text":"In version 12.0, the following information is read from the same option in seafile.conf
Since Seafile 10.0.2, you can enable the encrypted connections to the MySQL server by adding the following configuration options:
[Database]\nUSE_SSL = true\nSKIP_VERIFY = false\nCA_PATH = /etc/mysql/ca.pem\n
When set use_ssl
to true and skip_verify
to false, it will check whether the MySQL server certificate is legal through the CA configured in ca_path
. The ca_path
is a trusted CA certificate path for signing MySQL server certificates. When skip_verify
is true, there is no need to add the ca_path
option. The MySQL server certificate won't be verified at this time.
To use ADFS to log in to your Seafile, you need the following components:
A Winodws Server with ADFS installed. For configuring and installing ADFS you can see this article.
A valid SSL certificate for ADFS server, and here we use adfs-server.adfs.com as the domain name example.
A valid SSL certificate for Seafile server, and here we use demo.seafile.com as the domain name example.
You can generate them by:
``` openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout sp.key -out sp.crt
These x.509 certs are used to sign and encrypt elements like NameID and Metadata for SAML. \n\n Then copy these two files to **<seafile-install-path>/seahub-data/certs**. (if the certs folder not exists, create it.)\n\n2. x.509 cert from IdP (Identity Provider)\n\n 1. Log into the ADFS server and open the ADFS management.\n\n 1. Double click **Service** and choose **Certificates**.\n\n 1. Export the **Token-Signing** certificate:\n\n 1. Right-click the certificate and select **View Certificate**.\n 1. Select the **Details** tab.\n 1. Click **Copy to File** (select **DER encoded binary X.509**).\n\n 1. Convert this certificate to PEM format, rename it to **idp.crt**\n\n 1. Then copy it to **<seafile-install-path>/seahub-data/certs**.\n\n### Prepare IdP Metadata File\n\n1. Open https://adfs-server.adfs.com/federationmetadata/2007-06/federationmetadata.xml\n\n1. Save this xml file, rename it to **idp_federation_metadata.xml**\n\n1. Copy it to **<seafile-install-path>/seahub-data/certs**.\n\n### Install Requirements on Seafile Server\n\n- For Ubuntu 16.04\n
sudo apt install xmlsec1 sudo pip install cryptography djangosaml2==0.15.0 ### Config Seafile\n\nAdd the following lines to **seahub_settings.py**\n
from os import path import saml2 import saml2.saml"},{"location":"config/config_seafile_with_ADFS/#update-following-lines-according-to-your-situation","title":"update following lines according to your situation","text":"CERTS_DIR = '/seahub-data/certs' SP_SERVICE_URL = 'https://demo.seafile.com' XMLSEC_BINARY = '/usr/local/bin/xmlsec1' ATTRIBUTE_MAP_DIR = '/seafile-server-latest/seahub-extra/seahub_extra/adfs_auth/attribute-maps' SAML_ATTRIBUTE_MAPPING = { 'DisplayName': ('display_name', ), 'ContactEmail': ('contact_email', ), 'Deparment': ('department', ), 'Telephone': ('telephone', ), }"},{"location":"config/config_seafile_with_ADFS/#update-the-idp-section-in-sampl_config-according-to-your-situation-and-leave-others-as-default","title":"update the 'idp' section in SAMPL_CONFIG according to your situation, and leave others as default","text":"
ENABLE_ADFS_LOGIN = True EXTRA_AUTHENTICATION_BACKENDS = ( 'seahub_extra.adfs_auth.backends.Saml2Backend', ) SAML_USE_NAME_ID_AS_USERNAME = True LOGIN_REDIRECT_URL = '/saml2/complete/' SAML_CONFIG = { # full path to the xmlsec1 binary programm 'xmlsec_binary': XMLSEC_BINARY,
'allow_unknown_attributes': True,\n\n# your entity id, usually your subdomain plus the url to the metadata view\n'entityid': SP_SERVICE_URL + '/saml2/metadata/',\n\n# directory with attribute mapping\n'attribute_map_dir': ATTRIBUTE_MAP_DIR,\n\n# this block states what services we provide\n'service': {\n # we are just a lonely SP\n 'sp' : {\n \"allow_unsolicited\": True,\n 'name': 'Federated Seafile Service',\n 'name_id_format': saml2.saml.NAMEID_FORMAT_EMAILADDRESS,\n 'endpoints': {\n # url and binding to the assetion consumer service view\n # do not change the binding or service name\n 'assertion_consumer_service': [\n (SP_SERVICE_URL + '/saml2/acs/',\n saml2.BINDING_HTTP_POST),\n ],\n # url and binding to the single logout service view\n # do not change the binding or service name\n 'single_logout_service': [\n (SP_SERVICE_URL + '/saml2/ls/',\n saml2.BINDING_HTTP_REDIRECT),\n (SP_SERVICE_URL + '/saml2/ls/post',\n saml2.BINDING_HTTP_POST),\n ],\n },\n\n # attributes that this project need to identify a user\n 'required_attributes': [\"uid\"],\n\n # attributes that may be useful to have but not required\n 'optional_attributes': ['eduPersonAffiliation', ],\n\n # in this section the list of IdPs we talk to are defined\n 'idp': {\n # we do not need a WAYF service since there is\n # only an IdP defined here. 
This IdP should be\n # present in our metadata\n\n # the keys of this dictionary are entity ids\n 'https://adfs-server.adfs.com/federationmetadata/2007-06/federationmetadata.xml': {\n 'single_sign_on_service': {\n saml2.BINDING_HTTP_REDIRECT: 'https://adfs-server.adfs.com/adfs/ls/idpinitiatedsignon.aspx',\n },\n 'single_logout_service': {\n saml2.BINDING_HTTP_REDIRECT: 'https://adfs-server.adfs.com/adfs/ls/?wa=wsignout1.0',\n },\n },\n },\n },\n},\n\n# where the remote metadata is stored\n'metadata': {\n 'local': [path.join(CERTS_DIR, 'idp_federation_metadata.xml')],\n},\n\n# set to 1 to output debugging information\n'debug': 1,\n\n# Signing\n'key_file': '', \n'cert_file': path.join(CERTS_DIR, 'idp.crt'), # from IdP\n\n# Encryption\n'encryption_keypairs': [{\n 'key_file': path.join(CERTS_DIR, 'sp.key'), # private part\n 'cert_file': path.join(CERTS_DIR, 'sp.crt'), # public part\n}],\n\n'valid_for': 24, # how long is our metadata valid\n
}
```
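The "convert this certificate to PEM format" step in the certificate preparation above is typically done with openssl; as a sketch, Python's standard `ssl` module can do the same conversion. The file names `idp.cer` (the DER file exported from ADFS) and `idp.crt` are example names, not mandated by Seafile:

```python
import ssl

def der_to_pem(der_bytes: bytes) -> str:
    # Wraps raw DER bytes in base64 with BEGIN/END CERTIFICATE markers.
    return ssl.DER_cert_to_PEM_cert(der_bytes)

# Placeholder bytes for illustration; in practice you would read the exported file:
#   pem = der_to_pem(open("idp.cer", "rb").read())
#   open("idp.crt", "w").write(pem)
pem = der_to_pem(b"\x30\x03\x02\x01\x01")
print(pem.splitlines()[0])  # -> -----BEGIN CERTIFICATE-----
```

`ssl.PEM_cert_to_DER_cert` performs the reverse conversion, which is a convenient way to verify the round trip.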
### Config ADFS Server

Relying Party Trust is the connection between Seafile and ADFS.
1. Log into the ADFS server and open the ADFS management console.
2. Double-click **Trust Relationships**, then right-click **Relying Party Trusts** and select **Add Relying Party Trust…**.
3. Select **Import data about the relying party published online or on a local network**, and enter https://demo.seafile.com/saml2/metadata/ in the **Federation metadata address** field.
4. Click **Next** until **Finish**.
### Add Relying Party Claim Rules

Relying Party Claim Rules are used for attribute communication between Seafile and users in the Windows domain.

Important: Users in the Windows domain must have the E-mail value set.
1. Right-click the relying party trust and select **Edit Claim Rules...**.
2. On the **Issuance Transform Rules** tab, select **Add Rule...**.
3. Select **Send LDAP Attributes as Claims** as the claim rule template to use.
4. Give the claim a name such as **LDAP Attributes**.
5. Set the **Attribute Store** to **Active Directory**, the **LDAP Attribute** to **E-Mail-Addresses**, and the **Outgoing Claim Type** to **E-mail Address**.
6. Select **Finish**.
7. Click **Add Rule...** again.
8. Select **Transform an Incoming Claim**.
9. Give it a name such as **Email to Name ID**.
10. The **Incoming claim type** should be **E-mail Address** (it must match the **Outgoing Claim Type** in rule #1).
11. The **Outgoing claim type** is **Name ID**; this is required by the Seafile setting `'name_id_format': saml2.saml.NAMEID_FORMAT_EMAILADDRESS`.
12. The **Outgoing name ID format** is **Email**.
13. Pass through all claim values and click **Finish**.
References:

- https://support.zendesk.com/hc/en-us/articles/203663886-Setting-up-single-sign-on-using-Active-Directory-with-ADFS-and-SAML-Plus-and-Enterprise-
- http://wiki.servicenow.com/?title=Configuring_ADFS_2.0_to_Communicate_with_SAML_2.0#gsc.tab=0
- https://github.com/rohe/pysaml2/blob/master/src/saml2/saml.py
Restart Seahub so that your changes take effect.

Note: Subject lines may vary between releases; the ones below are based on Release 2.0.1.

### User reset his/her password

Subject: `seahub/seahub/auth/forms.py`, line 103

Body: `seahub/seahub/templates/registration/password_reset_email.html`

Note: You can copy `password_reset_email.html` to `seahub-data/custom/templates/registration/password_reset_email.html` and modify the new one. In this way, the customization will be maintained after upgrade.
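The copy step described in the note can be scripted. This is a minimal sketch; it assumes it is run from the top-level Seafile install directory that contains both `seahub` and `seahub-data`:

```python
import os
import shutil

# Assumed layout: current directory contains "seahub" and "seahub-data".
src = "seahub/seahub/templates/registration/password_reset_email.html"
dst = "seahub-data/custom/templates/registration/password_reset_email.html"

# Create the custom/ tree if it does not exist yet.
os.makedirs(os.path.dirname(dst), exist_ok=True)
if os.path.exists(src):
    shutil.copy(src, dst)  # edit the copy, not the original
```

The same pattern applies to the other templates listed below; only the two paths change.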
Subject: `seahub/seahub/views/sysadmin.py`, line 424

Body: `seahub/seahub/templates/sysadmin/user_add_email.html`

Note: You can copy `user_add_email.html` to `seahub-data/custom/templates/sysadmin/user_add_email.html` and modify the new one. In this way, the customization will be maintained after upgrade.
Subject: `seahub/seahub/views/sysadmin.py`, line 368

Body: `seahub/seahub/templates/sysadmin/user_reset_email.html`

Note: You can copy `user_reset_email.html` to `seahub-data/custom/templates/sysadmin/user_reset_email.html` and modify the new one. In this way, the customization will be maintained after upgrade.
Subject: `seahub/seahub/share/views.py`, line 668

Body: `seahub/seahub/templates/shared_link_email.html`
## Details about File Search

### Search Options

The following options can be set in `seafevents.conf` to control the behavior of file search. You need to restart seafile and seahub to make them take effect.
```
[INDEX FILES]
## must be "true" to enable search
enabled = true

## The interval the search index is updated. Can be s(seconds), m(minutes), h(hours), d(days)
interval = 10m

## this is for improving the search speed
highlight = fvh

## If true, indexes the contents of office/pdf files while updating the search index
## Note: If you change this option from "false" to "true", you need to clear the search index and update the index again.
index_office_pdf = false

## From 9.0.7 pro, Seafile supports connecting to Elasticsearch with username and password; configure them for the Elasticsearch server
username = elastic # username to connect to Elasticsearch
password = elastic_password # password to connect to Elasticsearch

## From 9.0.7 pro, Seafile supports connecting to Elasticsearch via HTTPS; configure HTTPS for the Elasticsearch server
scheme = https # The default is http. If the Elasticsearch server is not configured with HTTPS, scheme and cafile do not need to be configured
cafile = path/to/cert.pem # The certificate path for user authentication. If the Elasticsearch server does not enable certificate authentication, this does not need to be configured

## From version 11.0.5 Pro, you can use custom Elasticsearch index names for distinct instances when integrating multiple Seafile servers with a single Elasticsearch server.
repo_status_index_name = your-repo-status-index-name # default is `repo_head`
repo_files_index_name = your-repo-files-index-name # default is `repofiles`
```
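The `interval` value combines a number with a unit suffix (s, m, h, d). As a sketch of how such a value maps to an actual duration (the helper below is illustrative, not Seafile's own parser):

```python
import configparser

# Inline sample standing in for seafevents.conf.
sample = """
[INDEX FILES]
enabled = true
interval = 10m
index_office_pdf = false
"""

UNIT_SECONDS = {"s": 1, "m": 60, "h": 3600, "d": 86400}

def interval_to_seconds(value: str) -> int:
    # "10m" -> 600; supports the s/m/h/d suffixes described above.
    return int(value[:-1]) * UNIT_SECONDS[value[-1]]

cp = configparser.ConfigParser()
cp.read_string(sample)
section = cp["INDEX FILES"]
print(section.getboolean("enabled"), interval_to_seconds(section["interval"]))  # True 600
```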
### Enable full text search for Office/PDF files

Full text search is not enabled by default to save system resources. If you want to enable it, follow the instructions below.
#### Modify seafevents.conf

Deploy in Docker:

```
cd /opt/seafile-data/seafile/conf
nano seafevents.conf
```
Deploy from binary packages:

```
cd /opt/seafile/conf
nano seafevents.conf
```
Set `index_office_pdf` to `true`:

```
...
[INDEX FILES]
...
index_office_pdf = true
...
```
#### Restart Seafile server

Deploy in Docker:

```
docker exec -it seafile bash
cd /scripts
./seafile.sh restart

# delete the existing search index and recreate it
./pro/pro.py search --clear
./pro/pro.py search --update
```
Deploy from binary packages:

```
cd /opt/seafile/seafile-server-latest
./seafile.sh restart

# delete the existing search index and recreate it
./pro/pro.py search --clear
./pro/pro.py search --update
```
### Common problems

#### How to rebuild the index if something went wrong

You can rebuild the search index by running:

Deploy in Docker:

```
docker exec -it seafile bash
cd /scripts
./pro/pro.py search --clear
./pro/pro.py search --update
```
Deploy from binary packages:

```
cd /opt/seafile/seafile-server-latest
./pro/pro.py search --clear
./pro/pro.py search --update
```
Tip

If this does not work, you can try the following steps:

```
rm -rf pro-data/search
./pro/pro.py search --update
```
Create an Elasticsearch service on AWS according to the documentation.

Configure `seafevents.conf`:
```
[INDEX FILES]
enabled = true
interval = 10m
index_office_pdf = true
es_host = your domain endpoint (for example, https://search-my-domain.us-east-1.es.amazonaws.com)
es_port = 443
scheme = https
username = master user
password = password
highlight = fvh
repo_status_index_name = your-repo-status-index-name # default is `repo_head`
repo_files_index_name = your-repo-files-index-name # default is `repofiles`
```
Note

The version of the Python third-party package `elasticsearch` cannot be greater than 7.14.0; otherwise the Elasticsearch service cannot be accessed. See https://docs.aws.amazon.com/opensearch-service/latest/developerguide/samplecode.html#client-compatibility and https://github.com/elastic/elasticsearch-py/pull/1623.
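When checking the installed package against this constraint, compare the version numerically rather than as a string (lexically, "7.9.1" would wrongly sort above "7.14.0"). The helper below is an illustrative sketch, not part of Seafile:

```python
def version_tuple(version: str):
    # "7.13.4" -> (7, 13, 4); numeric tuples compare component by component.
    return tuple(int(part) for part in version.split("."))

MAX_SUPPORTED = (7, 14, 0)

print(version_tuple("7.13.4") <= MAX_SUPPORTED)  # True
print(version_tuple("7.9.1") <= MAX_SUPPORTED)   # True, although "7.9.1" > "7.14.0" as strings
print(version_tuple("8.0.0") <= MAX_SUPPORTED)   # False
```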
The search index is updated every 10 minutes by default, so before the first index update has run, every search will return no results.

To be able to search immediately, update the index manually.

Deploy in Docker:
```
docker exec -it seafile bash
cd /scripts
./pro/pro.py search --update
```
Deploy from binary packages:

```
cd /opt/seafile/seafile-server-latest
./pro/pro.py search --update
```
### Encrypted files cannot be searched

The server cannot index encrypted files, because their contents are stored encrypted.
## .env

The `.env` file specifies the components used by the Seafile-docker instance and the environment variables required by each component. The default contents are listed below:
```
COMPOSE_FILE='seafile-server.yml,caddy.yml'
COMPOSE_PATH_SEPARATOR=','

SEAFILE_IMAGE=seafileltd/seafile-pro-mc:12.0-latest
SEAFILE_DB_IMAGE=mariadb:10.11
SEAFILE_MEMCACHED_IMAGE=memcached:1.6.29
SEAFILE_ELASTICSEARCH_IMAGE=elasticsearch:8.15.0 # pro edition only
SEAFILE_CADDY_IMAGE=lucaslorentz/caddy-docker-proxy:2.9

SEAFILE_VOLUME=/opt/seafile-data
SEAFILE_MYSQL_VOLUME=/opt/seafile-mysql/db
SEAFILE_ELASTICSEARCH_VOLUME=/opt/seafile-elasticsearch/data # pro edition only
SEAFILE_CADDY_VOLUME=/opt/seafile-caddy

SEAFILE_MYSQL_DB_HOST=db
INIT_SEAFILE_MYSQL_ROOT_PASSWORD=ROOT_PASSWORD
SEAFILE_MYSQL_DB_USER=seafile
SEAFILE_MYSQL_DB_PASSWORD=PASSWORD
SEAFILE_MYSQL_DB_SEAFILE_DB_NAME=seafile_db
SEAFILE_MYSQL_DB_CCNET_DB_NAME=ccnet_db
SEAFILE_MYSQL_DB_SEAHUB_DB_NAME=seahub_db

TIME_ZONE=Etc/UTC

JWT_PRIVATE_KEY=

SEAFILE_SERVER_HOSTNAME=seafile.example.com
SEAFILE_SERVER_PROTOCOL=https

INIT_SEAFILE_ADMIN_EMAIL=me@example.com
INIT_SEAFILE_ADMIN_PASSWORD=asecret
INIT_S3_STORAGE_BACKEND_CONFIG=false # pro edition only
INIT_S3_COMMIT_BUCKET=<your-commit-objects> # pro edition only
INIT_S3_FS_BUCKET=<your-fs-objects> # pro edition only
INIT_S3_BLOCK_BUCKET=<your-block-objects> # pro edition only
INIT_S3_KEY_ID=<your-key-id> # pro edition only
INIT_S3_SECRET_KEY=<your-secret-key> # pro edition only

CLUSTER_INIT_MODE=true # cluster only
CLUSTER_INIT_MEMCACHED_HOST=<your memcached host> # cluster only
CLUSTER_INIT_ES_HOST=<your elasticsearch server HOST> # cluster only
CLUSTER_INIT_ES_PORT=9200 # cluster only
CLUSTER_MODE=frontend # cluster only

SEADOC_IMAGE=seafileltd/sdoc-server:1.0-latest
SEADOC_VOLUME=/opt/seadoc-data

ENABLE_SEADOC=false
SEADOC_SERVER_URL=http://seafile.example.com/sdoc-server

NOTIFICATION_SERVER_IMAGE=seafileltd/notification-server:12.0-latest
NOTIFICATION_SERVER_VOLUME=/opt/notification-data
```
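`JWT_PRIVATE_KEY` must be filled in with a random string of at least 32 characters (the variable notes below suggest `pwgen -s 40 1`). If `pwgen` is not available, an equivalent value can be generated with Python's `secrets` module:

```python
import secrets
import string

# Equivalent of `pwgen -s 40 1`: 40 random characters from letters and digits.
alphabet = string.ascii_letters + string.digits
jwt_private_key = "".join(secrets.choice(alphabet) for _ in range(40))

print("JWT_PRIVATE_KEY=" + jwt_private_key)
```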
### Seafile-docker configurations

#### Components configurations

- `COMPOSE_FILE`: The `.yml` files for the components of Seafile-docker; each `.yml` must be separated by the symbol defined in `COMPOSE_PATH_SEPARATOR`. The core components are `seafile-server.yml` and `caddy.yml`, which must always be included.
- `COMPOSE_PATH_SEPARATOR`: The symbol used to separate the `.yml` files in `COMPOSE_FILE`; default is `,`.
- `SEAFILE_IMAGE`: The image of Seafile server, default is `seafileltd/seafile-pro-mc:12.0-latest`.
- `SEAFILE_DB_IMAGE`: Database server image, default is `mariadb:10.11`.
- `SEAFILE_MEMCACHED_IMAGE`: Cache server image, default is `memcached:1.6.29`.
- `SEAFILE_ELASTICSEARCH_IMAGE`: Only valid in pro edition. The Elasticsearch image, default is `elasticsearch:8.15.0`.
- `SEAFILE_CADDY_IMAGE`: Caddy server image, default is `lucaslorentz/caddy-docker-proxy:2.9`.
- `SEADOC_IMAGE`: Only valid after integrating SeaDoc. SeaDoc server image, default is `seafileltd/sdoc-server:1.0-latest`.
- `SEAFILE_VOLUME`: The volume directory of Seafile data, default is `/opt/seafile-data`.
- `SEAFILE_MYSQL_VOLUME`: The volume directory of MySQL data, default is `/opt/seafile-mysql/db`.
- `SEAFILE_CADDY_VOLUME`: The volume directory of Caddy data, used to store certificates obtained from Let's Encrypt; default is `/opt/seafile-caddy`.
- `SEAFILE_ELASTICSEARCH_VOLUME`: Only valid in pro edition. The volume directory of Elasticsearch data, default is `/opt/seafile-elasticsearch/data`.
- `SEADOC_VOLUME`: Only valid after integrating SeaDoc. The volume directory of SeaDoc server data, default is `/opt/seadoc-data`.
- `SEAFILE_MYSQL_DB_HOST`: The host address of MySQL, default is the pre-defined service name `db` in the Seafile-docker instance.
- `INIT_SEAFILE_MYSQL_ROOT_PASSWORD`: (Only required on first deployment) The `root` password of MySQL.
- `SEAFILE_MYSQL_DB_USER`: The MySQL user used by Seafile (`database` - `user` can be found in `conf/seafile.conf`).
- `SEAFILE_MYSQL_DB_PASSWORD`: The password of the MySQL user `seafile`.
- `SEAFILE_MYSQL_DB_SEAFILE_DB_NAME`: The name of the Seafile database, default is `seafile_db`.
- `SEAFILE_MYSQL_DB_CCNET_DB_NAME`: The name of the ccnet database, default is `ccnet_db`.
- `SEAFILE_MYSQL_DB_SEAHUB_DB_NAME`: The name of the seahub database, default is `seahub_db`.
- `JWT_PRIVATE_KEY`: A random string with a length of no less than 32 characters; generation example: `pwgen -s 40 1`.
- `SEAFILE_SERVER_HOSTNAME`: Seafile server hostname or domain.
- `SEAFILE_SERVER_PROTOCOL`: Seafile server protocol (http or https).
- `TIME_ZONE`: Time zone (default `Etc/UTC`).
- `INIT_SEAFILE_ADMIN_EMAIL`: Admin username.
- `INIT_SEAFILE_ADMIN_PASSWORD`: Admin password.
- `ENABLE_SEADOC`: Whether to enable the SeaDoc server, default is `false`.
- `SEADOC_SERVER_URL`: Only valid when `ENABLE_SEADOC=true`. URL of the SeaDoc server (e.g., http://seafile.example.com/sdoc-server).
- `CLUSTER_INIT_MODE`: (Only valid in pro edition at first deployment.) Cluster initialization mode, in which the configuration files necessary for the service to run will be generated (but the service will not be started). If the configuration files already exist, no operation is performed. The default value is `true`. Once the configuration files have been generated, be sure to set this item to `false`.
- `CLUSTER_INIT_MEMCACHED_HOST`: (Only valid in pro edition at first deployment.) Cluster Memcached host. (If your Memcached server does not use port `11211`, please modify seahub_settings.py and seafile.conf.)
- `CLUSTER_INIT_ES_HOST`: (Only valid in pro edition at first deployment.) Your cluster Elasticsearch server host.
- `CLUSTER_INIT_ES_PORT`: (Only valid in pro edition at first deployment.) Your cluster Elasticsearch server port. Default is `9200`.
- `CLUSTER_MODE`: Seafile service node type, i.e., `frontend` (default) or `backend`.
- `INIT_S3_STORAGE_BACKEND_CONFIG`: Whether to configure the S3 storage backend during initialization (i.e., the following items in this section; for more details, please refer to AWS S3); default is `false`.
- `INIT_S3_COMMIT_BUCKET`: S3 storage backend commit objects bucket.
- `INIT_S3_FS_BUCKET`: S3 storage backend fs objects bucket.
- `INIT_S3_BLOCK_BUCKET`: S3 storage backend block objects bucket.
- `INIT_S3_KEY_ID`: S3 storage backend key ID.
- `INIT_S3_SECRET_KEY`: S3 storage backend secret key.
- `INIT_S3_USE_V4_SIGNATURE`: Use the v4 signature protocol of S3 if enabled; default is `true`.
- `INIT_S3_AWS_REGION`: Region of your buckets (AWS only), default is `us-east-1`. (Only valid when `INIT_S3_USE_V4_SIGNATURE` is set to `true`.)
- `INIT_S3_HOST`: Host of your buckets, default is `s3.us-east-1.amazonaws.com`. (Only valid when `INIT_S3_USE_V4_SIGNATURE` is set to `true`.)
- `INIT_S3_USE_HTTPS`: Use HTTPS connections to S3 if enabled; default is `true`.

## Configure Seafile CE to use LDAP

This documentation is for the Community Edition. If you're using the Pro Edition, please refer to the Seafile Pro documentation.
### How does LDAP User Management work in Seafile

When Seafile is integrated with LDAP, users in the system can be divided into two tiers:
- Users within Seafile's internal user database. Some attributes are attached to these users, such as whether the user is a system admin and whether the account is activated.
- Users in the LDAP server. These are all the intended users of Seafile inside the LDAP server. Seafile doesn't manipulate these users directly; it has to import them into its internal database before setting attributes on them.

When Seafile counts the number of users in the system, it only counts the activated users in its internal database.
### Basic LDAP Integration

The only requirement for Seafile to use LDAP for authentication is that there must be a unique identifier for each user in the LDAP server. This ID should also be user-friendly, as users will use it as the username when logging in. Below are some usual options for this unique identifier:

- `user-login-name@domain-name`, e.g. `john@example.com`. It's not a real email address, but it works fine as the unique identifier.
to map the identifier to internal user ID in Seafile. When this ID is changed in LDAP for a user, you only need to update social_auth_usersocialauth
table
Add the following options to seahub_settings.py
. Examples are as follows:
ENABLE_LDAP = True\nLDAP_SERVER_URL = 'ldap://192.168.0.1' \nLDAP_BASE_DN = 'ou=test,dc=seafile,dc=ren' \nLDAP_ADMIN_DN = 'administrator@example.com' \nLDAP_ADMIN_PASSWORD = 'yourpassword' \nLDAP_PROVIDER = 'ldap' \nLDAP_LOGIN_ATTR = 'email' \nLDAP_CONTACT_EMAIL_ATTR = '' \nLDAP_USER_ROLE_ATTR = '' \nLDAP_USER_FIRST_NAME_ATTR = 'givenName' \nLDAP_USER_LAST_NAME_ATTR = 'sn' \nLDAP_USER_NAME_REVERSE = False \nLDAP_FILTER = 'memberOf=CN=testgroup,OU=test,DC=seafile,DC=ren' \n
Meaning of some options:
variable descriptionLDAP_SERVER_URL
The URL of LDAP server LDAP_BASE_DN
The root node of users who can log in to Seafile in the LDAP server LDAP_ADMIN_DN
DN of the administrator used to query the LDAP server for information. For OpenLDAP, it may be cn=admin,dc=example,dc=com
LDAP_ADMIN_PASSWORD
Password of LDAP_ADMIN_DN
LDAP_PROVIDER
Identify the source of the user, used in the table social_auth_usersocialauth
, defaults to 'ldap' LDAP_LOGIN_ATTR
User's attribute used to log in to Seafile. It should be a unique identifier for the user in LDAP server. Learn more about this id from the descriptions at the beginning of this section. LDAP_CONTACT_EMAIL_ATTR
LDAP user's contact_email
attribute LDAP_USER_ROLE_ATTR
LDAP user's role attribute LDAP_USER_FIRST_NAME_ATTR
Attribute for user's first name. It's \"givenName\"
by default. LDAP_USER_LAST_NAME_ATTR
Attribute for user's last name. It's \"sn\"
by default. LDAP_USER_NAME_REVERSE
In some languages, such as Chinese, the display order of the first and last name is reversed. Set this option if you need it. LDAP_FILTER
Additional filter conditions. Users who meet the filter conditions can log in, otherwise they cannot log in. Tips for choosing LDAP_BASE_DN
and LDAP_ADMIN_DN
:
To determine the LDAP_BASE_DN
, you first have to navigate your organization hierachy on the domain controller GUI.
If you want to allow all users to use Seafile, you can use cn=users,dc=yourdomain,dc=com
as LDAP_BASE_DN
(with proper adjustment for your own needs).
If you want to limit users to a certain OU (Organization Unit), you run dsquery
command on the domain controller to find out the DN for this OU. For example, if the OU is staffs
, you can run dsquery ou -name staff
. More information can be found here.
AD supports user@domain.name
format for the LDAP_ADMIN_DN
option. For example you can use administrator@example.com for LDAP_ADMIN_DN
. Sometime the domain controller doesn't recognize this format. You can still use dsquery
command to find out user's DN. For example, if the user name is 'seafileuser', run dsquery user -name seafileuser
. More information here.
Multiple base DN is useful when your company has more than one OUs to use Seafile. You can specify a list of base DN in the LDAP_BASE_DN
option. The DNs are separated by \";\", e.g.
LDAP_BASE_DN = 'ou=developers,dc=example,dc=com;ou=marketing,dc=example,dc=com'\n
"},{"location":"config/ldap_in_11.0_ce/#additional-search-filter","title":"Additional Search Filter","text":"Search filter is very useful when you have a large organization but only a portion of people want to use Seafile. The filter can be given by setting LDAP_FILTER
option. The value of this option follows standard LDAP search filter syntax (https://msdn.microsoft.com/en-us/library/aa746475(v=vs.85).aspx).
The final filter used for searching for users is (&($LOGIN_ATTR=*)($LDAP_FILTER))
. $LOGIN_ATTR
and $LDAP_FILTER
will be replaced by your option values.
For example, add below option to seahub_settings.py
:
LDAP_FILTER = 'memberOf=CN=group,CN=developers,DC=example,DC=com'\n
The final search filter would be (&(mail=*)(memberOf=CN=group,CN=developers,DC=example,DC=com))
Note that the case of attribute names in the above example is significant. The memberOf
attribute is only available in Active Directory.
You can use the LDAP_FILTER
option to limit user scope to a certain AD group.
First, you should find out the DN for the group. Again, we'll use the dsquery
command on the domain controller. For example, if group name is 'seafilegroup', run dsquery group -name seafilegroup
.
Add below option to seahub_settings.py
:
LDAP_FILTER = 'memberOf={output of dsquery command}'\n
"},{"location":"config/ldap_in_11.0_ce/#using-tls-connection-to-ldap-server","title":"Using TLS connection to LDAP server","text":"If your LDAP service supports TLS connections, you can configure LDAP_SERVER_URL
as the access address of the ldaps protocol to use TLS to connect to the LDAP service, for example:
LDAP_SERVER_URL = 'ldaps://192.168.0.1:636/'\n
"},{"location":"config/ldap_in_11.0_pro/","title":"Configure Seafile Pro Edition to use LDAP","text":""},{"location":"config/ldap_in_11.0_pro/#how-does-ldap-user-management-work-in-seafile","title":"How does LDAP User Management work in Seafile","text":"When Seafile is integrated with LDAP, users in the system can be divided into two tiers:
How LDAP user management works and the basic integration options (including the `seahub_settings.py` settings and the tips for choosing `LDAP_BASE_DN` and `LDAP_ADMIN_DN`) are the same as in the Community Edition described above. The options below are specific to the Pro Edition.
User's full name, department and contact email address can be synced to internal database. Users can use this information to more easily search for a specific user. User's Windows or Unix login id can be synced to the internal database. This allows the user to log in with its familiar login id. When a user is removed from LDAP, the corresponding user in Seafile will be deactivated. Otherwise, he could still sync files with Seafile client or access the web interface. After synchronization is complete, you can see the user's full name, department and contact email on its profile page.
"},{"location":"config/ldap_in_11.0_pro/#sync-configuration-items","title":"Sync configuration items","text":"Add the following options to seahub_settings.py
. Examples are as follows:
# Basic configuration items\nENABLE_LDAP = True\n......\n\n# ldap user sync options.\nLDAP_SYNC_INTERVAL = 60 \nENABLE_LDAP_USER_SYNC = True \nLDAP_USER_OBJECT_CLASS = 'person'\nLDAP_DEPT_ATTR = '' \nLDAP_UID_ATTR = '' \nLDAP_AUTO_REACTIVATE_USERS = True \nLDAP_USE_PAGED_RESULT = False \nIMPORT_NEW_USER = True \nACTIVATE_USER_WHEN_IMPORT = True \nDEACTIVE_USER_IF_NOTFOUND = False \nENABLE_EXTRA_USER_INFO_SYNC = True \n
Meaning of some options:
Variable Description LDAP_SYNC_INTERVAL The interval to sync. Unit is minutes. Defaults to 60 minutes. ENABLE_LDAP_USER_SYNC set to \"true\" if you want to enable ldap user synchronization LDAP_USER_OBJECT_CLASS This is the name of the class used to search for user objects. In Active Directory, it's usually \"person\". The default value is \"person\". LDAP_DEPT_ATTR Attribute for department info. LDAP_UID_ATTR Attribute for Windows login name. If this is synchronized, users can also log in with their Windows login name. In AD, the attributesAMAccountName
can be used as UID_ATTR
. The attribute will be stored as login_id in Seafile (in seahub_db.profile_profile table). LDAP_AUTO_REACTIVATE_USERS Whether to auto activate deactivated user, default by 'true' LDAP_USE_PAGED_RESULT Whether to use pagination extension. It is useful when you have more than 1000 users in LDAP server. IMPORT_NEW_USER Whether to import new users when sync user. ACTIVE_USER_WHEN_IMPORT Whether to activate the user automatically when imported. DEACTIVE_USER_IF_NOTFOUND set to \"true\" if you want to deactivate a user when he/she was deleted in AD server. ENABLE_EXTRA_USER_INFO_SYNC Enable synchronization of additional user information, including user's full name, department, and Windows login name, etc."},{"location":"config/ldap_in_11.0_pro/#importing-users-without-activating-them","title":"Importing Users without Activating Them","text":"The users imported with the above configuration will be activated by default. For some organizations with large number of users, they may want to import user information (such as user full name) without activating the imported users. Activating all imported users will require licenses for all users in LDAP, which may not be affordable.
Seafile provides a combination of options for such a use case. You can modify the below option in `seahub_settings.py`:
```
ACTIVATE_USER_WHEN_IMPORT = False
```
This prevents Seafile from activating imported users. Then, add the below option to `seahub_settings.py`:
```
ACTIVATE_AFTER_FIRST_LOGIN = True
```
This option will automatically activate users when they log in to Seafile for the first time.
### Reactivating Users

When you set the `DEACTIVE_USER_IF_NOTFOUND` option, a user will be deactivated when he/she is not found in the LDAP server. By default, even after this user reappears in the LDAP server, the account won't be reactivated automatically. This is to prevent auto-reactivating a user that was manually deactivated by the system admin.

However, sometimes it's desirable to auto-reactivate such users. You can modify the below option in `seahub_settings.py`:
LDAP_AUTO_REACTIVATE_USERS = True\n
"},{"location":"config/ldap_in_11.0_pro/#manually-trigger-synchronization","title":"Manually Trigger Synchronization","text":"To test your LDAP sync configuration, you can run the sync command manually.
To trigger LDAP sync manually:
cd seafile-server-latest\n./pro/pro.py ldapsync\n
For Seafile Docker
docker exec -it seafile /opt/seafile/seafile-server-latest/pro/pro.py ldapsync\n
"},{"location":"config/ldap_in_11.0_pro/#setting-up-ldap-group-sync-optional","title":"Setting Up LDAP Group Sync (optional)","text":""},{"location":"config/ldap_in_11.0_pro/#how-it-works","title":"How It Works","text":"The importing or syncing process maps groups from LDAP directory server to groups in Seafile's internal database. This process is one-way.
Any changes to groups in the database won't propagate back to LDAP;
Any changes to groups in the database, except for \"setting a member as group admin\", will be overwritten in the next LDAP sync operation. If you want to add or delete members, you can only do that on the LDAP server.
The creator of imported groups will be set to the system admin.
There are two modes of operation:
Periodical: the syncing process will be executed at a fixed interval
Manual: there is a script you can run to trigger the syncing once
Before enabling LDAP group sync, you should have configured LDAP authentication. See Basic LDAP Integration for details.
The following are LDAP group sync related options:
# ldap group sync options.\nENABLE_LDAP_GROUP_SYNC = True # Whether to enable group sync\nLDAP_GROUP_OBJECT_CLASS = 'group' # This is the name of the class used to search for group objects.\nLDAP_GROUP_MEMBER_ATTR = 'member' # The attribute field to use when loading the group's members. \n # For most directory servers, the attribute is \"member\",\n # which is the default value. For \"posixGroup\", it should be set to \"memberUid\".\nLDAP_USER_ATTR_IN_MEMBERUID = 'uid' # The user attribute set in the 'memberUid' option, \n # which is used in \"posixGroup\". The default value is \"uid\".\nLDAP_GROUP_UUID_ATTR = 'objectGUID' # Used to uniquely identify groups in LDAP\nLDAP_GROUP_FILTER = '' # An additional filter to use when searching group objects.\n # If it's set, the final filter used to run search is \"(&(objectClass=GROUP_OBJECT_CLASS)(GROUP_FILTER))\";\n # otherwise the final filter would be \"(objectClass=GROUP_OBJECT_CLASS)\".\nLDAP_USE_GROUP_MEMBER_RANGE_QUERY = False # When a group contains too many members, \n # AD will only return part of them. Set this option to TRUE\n # to make LDAP sync work with large groups.\nDEL_GROUP_IF_NOT_FOUND = False # Set to \"true\" and the sync process will delete a group if it is not found in the LDAP server.\nLDAP_SYNC_GROUP_AS_DEPARTMENT = False # Whether to sync groups as top-level departments in Seafile.\n # Learn more about departments in Seafile [here](https://help.seafile.com/sharing_collaboration/departments/).\nLDAP_DEPT_NAME_ATTR = '' # Used to get the department name.\n
Meaning of some options:
variable description ENABLE_LDAP_GROUP_SYNC Whether to enable group sync. LDAP_GROUP_OBJECT_CLASS This is the name of the class used to search for group objects. LDAP_GROUP_MEMBER_ATTR The attribute field to use when loading the group's members. For most directory servers, the attribute is \"member\" which is the default value. For \"posixGroup\", it should be set to \"memberUid\". LDAP_USER_ATTR_IN_MEMBERUID The user attribute set in 'memberUid' option, which is used in \"posixGroup\". The default value is \"uid\". LDAP_GROUP_UUID_ATTR Used to uniquely identify groups in LDAP. LDAP_GROUP_FILTER An additional filter to use when searching group objects. If it's set, the final filter used to run search is(&(objectClass=GROUP_OBJECT_CLASS)(GROUP_FILTER))
; otherwise the final filter would be (objectClass=GROUP_OBJECT_CLASS)
. LDAP_USE_GROUP_MEMBER_RANGE_QUERY When a group contains too many members, AD will only return part of them. Set this option to TRUE to make LDAP sync work with large groups. DEL_GROUP_IF_NOT_FOUND Set to \"true\" and the sync process will delete a group if it is not found in the LDAP server. LDAP_SYNC_GROUP_AS_DEPARTMENT Whether to sync groups as top-level departments in Seafile. Learn more about departments in Seafile here. LDAP_DEPT_NAME_ATTR Used to get the department name. Tip
The search base for groups is the option LDAP_BASE_DN
.
Some LDAP servers, such as Active Directory, allow a group to be a member of another group. This is called \"group nesting\". If we find a nested group B in group A, we recursively add all the members of group B into group A. Group B is still imported as a separate group. That is, all members of group B are also members of group A.
In some LDAP servers, such as OpenLDAP, it's common practice to use Posix groups to store group membership. To import Posix groups as Seafile groups, set the LDAP_GROUP_OBJECT_CLASS
option to posixGroup
. A posixGroup
object in LDAP usually contains a multi-value attribute for the list of member UIDs. The name of this attribute can be set with the LDAP_GROUP_MEMBER_ATTR
option; for posixGroup it is typically memberUid. The value of the memberUid
attribute is an ID that can be used to identify a user, which corresponds to an attribute in the user object. The name of this ID attribute is usually uid
, but can be set via the LDAP_USER_ATTR_IN_MEMBERUID
option. Note that posixGroup
doesn't support nested groups.
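Putting the Posix group options together, a minimal sketch of a seahub_settings.py fragment for syncing OpenLDAP Posix groups could look like the following. All values here are illustrative placeholders; adjust them to your own directory (for example, LDAP_GROUP_UUID_ATTR is set to OpenLDAP's operational entryUUID attribute here, which may differ in your setup).

```python
# Hypothetical seahub_settings.py fragment for syncing OpenLDAP Posix groups.
# All values are placeholders for illustration, not a definitive configuration.
ENABLE_LDAP_GROUP_SYNC = True
LDAP_GROUP_OBJECT_CLASS = 'posixGroup'   # search for posixGroup objects
LDAP_GROUP_MEMBER_ATTR = 'memberUid'     # posixGroup stores member UIDs here
LDAP_USER_ATTR_IN_MEMBERUID = 'uid'      # user attribute the memberUid values refer to
LDAP_GROUP_UUID_ATTR = 'entryUUID'       # assumed OpenLDAP operational UUID attribute
```

Remember that posixGroup doesn't support nested groups, so no recursive member expansion happens with this configuration.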
A department in Seafile is a special group. In addition to what you can do with a group, there are two key new features for departments:
Departments support hierarchy. A department can have any number of levels of sub-departments.
Departments can have storage quotas.
Seafile supports syncing OU (Organizational Units) from AD/LDAP to departments. The sync process keeps the hierarchical structure of the OUs.
Options for syncing departments from OU:
LDAP_SYNC_DEPARTMENT_FROM_OU = True # Whether to enable syncing departments from OU.\nLDAP_DEPT_NAME_ATTR = 'description' # Used to get the department name.\nLDAP_CREATE_DEPARTMENT_LIBRARY = False # If you decide to sync the group as a department,\n # you can set this option to \"true\". In this way, when \n # the group is synchronized for the first time, a library\n # is automatically created for the department, and the \n # library's name is the department's name.\nLDAP_DEPT_REPO_PERM = 'rw' # Set the permissions of the department repo, default permission is 'rw'.\nLDAP_DEFAULT_DEPARTMENT_QUOTA = -2 # You can set a default space quota for each department\n # when you synchronize a group for the first time. The \n # quota is set to unlimited if this option is not set.\n # Unit is MB.\nDEL_DEPARTMENT_IF_NOT_FOUND = False # Set to \"true\" and the sync process will delete a department if it is not found in the LDAP server.\n
"},{"location":"config/ldap_in_11.0_pro/#periodical-and-manual-sync","title":"Periodical and Manual Sync","text":"Periodical sync won't happen immediately after you restart Seafile server. It is scheduled after the first sync interval. For example, if you set the sync interval to 30 minutes, the first auto sync will happen 30 minutes after the restart. To sync immediately, you need to trigger it manually.
After the sync has run, you should see log messages like the following in logs/seafevents.log. You should then be able to see the groups in the system admin page.
[2023-03-30 18:15:05,109] [DEBUG] create group 1, and add dn pair CN=DnsUpdateProxy,CN=Users,DC=Seafile,DC=local<->1 success.\n[2023-03-30 18:15:05,145] [DEBUG] create group 2, and add dn pair CN=Domain Computers,CN=Users,DC=Seafile,DC=local<->2 success.\n[2023-03-30 18:15:05,154] [DEBUG] create group 3, and add dn pair CN=Domain Users,CN=Users,DC=Seafile,DC=local<->3 success.\n[2023-03-30 18:15:05,164] [DEBUG] create group 4, and add dn pair CN=Domain Admins,CN=Users,DC=Seafile,DC=local<->4 success.\n[2023-03-30 18:15:05,176] [DEBUG] create group 5, and add dn pair CN=RAS and IAS Servers,CN=Users,DC=Seafile,DC=local<->5 success.\n[2023-03-30 18:15:05,186] [DEBUG] create group 6, and add dn pair CN=Enterprise Admins,CN=Users,DC=Seafile,DC=local<->6 success.\n[2023-03-30 18:15:05,197] [DEBUG] create group 7, and add dn pair CN=dev,CN=Users,DC=Seafile,DC=local<->7 success.\n
To trigger LDAP sync manually,
cd seafile-server-latest\n./pro/pro.py ldapsync\n
For Seafile Docker
docker exec -it seafile /opt/seafile/seafile-server-latest/pro/pro.py ldapsync\n
"},{"location":"config/ldap_in_11.0_pro/#advanced-ldap-integration-options","title":"Advanced LDAP Integration Options","text":""},{"location":"config/ldap_in_11.0_pro/#multiple-base","title":"Multiple BASE","text":"Multiple base DN is useful when your company has more than one OU using Seafile. You can specify a list of base DNs in the LDAP_BASE_DN
option. The DNs are separated by \";\", e.g.
LDAP_BASE_DN = 'ou=developers,dc=example,dc=com;ou=marketing,dc=example,dc=com'\n
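The base DNs are simply separated by \";\"; a quick sketch of this parsing logic (illustrative only, not Seafile's actual code):

```python
# Illustrative helper: split a multi-base-DN string on ";" separators,
# the way the documentation describes. Not Seafile's actual implementation.
def parse_base_dns(base_dn):
    return [dn.strip() for dn in base_dn.split(';') if dn.strip()]

dns = parse_base_dns('ou=developers,dc=example,dc=com;ou=marketing,dc=example,dc=com')
# dns now holds two DNs, one per OU
```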
"},{"location":"config/ldap_in_11.0_pro/#additional-search-filter","title":"Additional Search Filter","text":"Search filter is very useful when you have a large organization but only a portion of people want to use Seafile. The filter can be given by setting LDAP_FILTER
option. The value of this option follows standard LDAP search filter syntax (https://msdn.microsoft.com/en-us/library/aa746475(v=vs.85).aspx).
The final filter used for searching for users is (&($LOGIN_ATTR=*)($LDAP_FILTER))
. $LOGIN_ATTR
and $LDAP_FILTER
will be replaced by your option values.
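This combination can be sketched as follows (illustrative only; Seafile composes the filter internally):

```python
# Illustrative sketch of how the final user search filter is composed
# from LOGIN_ATTR and LDAP_FILTER. Not Seafile's actual code.
def build_user_filter(login_attr, ldap_filter=''):
    if ldap_filter:
        return '(&({}=*)({}))'.format(login_attr, ldap_filter)
    # with no additional filter, only require that the login attribute exists
    return '({}=*)'.format(login_attr)
```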
For example, add the following option to seahub_settings.py
:
LDAP_FILTER = 'memberOf=CN=group,CN=developers,DC=example,DC=com'\n
The final search filter would be (&(mail=*)(memberOf=CN=group,CN=developers,DC=example,DC=com))
The case of attribute names in the above example is significant. The memberOf
attribute is only available in Active Directory.
You can use the LDAP_FILTER
option to limit user scope to a certain AD group.
First, you should find out the DN for the group. Again, we'll use the dsquery
command on the domain controller. For example, if group name is 'seafilegroup', run dsquery group -name seafilegroup
.
Add the following option to seahub_settings.py
:
LDAP_FILTER = 'memberOf={output of dsquery command}'\n
"},{"location":"config/ldap_in_11.0_pro/#using-tls-connection-to-ldap-server","title":"Using TLS connection to LDAP server","text":"If your LDAP service supports TLS connections, you can configure LDAP_SERVER_URL
as the access address of the ldaps protocol to use TLS to connect to the LDAP service, for example:
LDAP_SERVER_URL = 'ldaps://192.168.0.1:636/'\n
"},{"location":"config/ldap_in_11.0_pro/#use-paged-results-extension","title":"Use paged results extension","text":"LDAP protocol version 3 supports the \"paged results\" (PR) extension. When you have a large number of users, this option can greatly improve the performance of listing users. Most directory servers nowadays support this extension.
In Seafile Pro Edition, add this option to seahub_settings.py
to enable PR:
LDAP_USE_PAGED_RESULT = True\n
"},{"location":"config/ldap_in_11.0_pro/#follow-referrals","title":"Follow referrals","text":"Seafile Pro Edition supports auto following referrals in LDAP search. This is useful for partitioned LDAP or AD servers, where users may be spread across multiple directory servers. For more information about referrals, you can refer to this article.
To configure, add the following option to seahub_settings.py
, e.g.:
LDAP_FOLLOW_REFERRALS = True\n
"},{"location":"config/ldap_in_11.0_pro/#configure-multi-ldap-servers","title":"Configure Multi-LDAP Servers","text":"Seafile Pro Edition supports multiple LDAP servers working with Seafile. When getting or searching for an LDAP user, Seafile iterates over all configured LDAP servers until a match is found; when listing all LDAP users, it iterates over all servers to collect all users; and LDAP sync syncs the user/group info from all configured servers to Seafile.
Currently, only two LDAP servers are supported.
If you want to use multi-ldap servers, please replace LDAP
in the options with MULTI_LDAP_1
, and then add them to seahub_settings.py
, for example:
# Basic config options\nENABLE_LDAP = True\n......\n\n# Multi ldap config options\nENABLE_MULTI_LDAP_1 = True\nMULTI_LDAP_1_SERVER_URL = 'ldap://192.168.0.2'\nMULTI_LDAP_1_BASE_DN = 'ou=test,dc=seafile,dc=top'\nMULTI_LDAP_1_ADMIN_DN = 'administrator@example.top'\nMULTI_LDAP_1_ADMIN_PASSWORD = 'Hello@123'\nMULTI_LDAP_1_PROVIDER = 'ldap1'\nMULTI_LDAP_1_LOGIN_ATTR = 'userPrincipalName'\n\n# Optional configs\nMULTI_LDAP_1_USER_FIRST_NAME_ATTR = 'givenName'\nMULTI_LDAP_1_USER_LAST_NAME_ATTR = 'sn'\nMULTI_LDAP_1_USER_NAME_REVERSE = False\nENABLE_MULTI_LDAP_1_EXTRA_USER_INFO_SYNC = True\n\nMULTI_LDAP_1_FILTER = 'memberOf=CN=testgroup,OU=test,DC=seafile,DC=ren' \nMULTI_LDAP_1_USE_PAGED_RESULT = False\nMULTI_LDAP_1_FOLLOW_REFERRALS = True\nENABLE_MULTI_LDAP_1_USER_SYNC = True\nENABLE_MULTI_LDAP_1_GROUP_SYNC = True\nMULTI_LDAP_1_SYNC_DEPARTMENT_FROM_OU = True\n\nMULTI_LDAP_1_USER_OBJECT_CLASS = 'person'\nMULTI_LDAP_1_DEPT_ATTR = ''\nMULTI_LDAP_1_UID_ATTR = ''\nMULTI_LDAP_1_CONTACT_EMAIL_ATTR = ''\nMULTI_LDAP_1_USER_ROLE_ATTR = ''\nMULTI_LDAP_1_AUTO_REACTIVATE_USERS = True\n\nMULTI_LDAP_1_GROUP_OBJECT_CLASS = 'group'\nMULTI_LDAP_1_GROUP_FILTER = ''\nMULTI_LDAP_1_GROUP_MEMBER_ATTR = 'member'\nMULTI_LDAP_1_GROUP_UUID_ATTR = 'objectGUID'\nMULTI_LDAP_1_CREATE_DEPARTMENT_LIBRARY = False\nMULTI_LDAP_1_DEPT_REPO_PERM = 'rw'\nMULTI_LDAP_1_DEFAULT_DEPARTMENT_QUOTA = -2\nMULTI_LDAP_1_SYNC_GROUP_AS_DEPARTMENT = False\nMULTI_LDAP_1_USE_GROUP_MEMBER_RANGE_QUERY = False\nMULTI_LDAP_1_USER_ATTR_IN_MEMBERUID = 'uid'\nMULTI_LDAP_1_DEPT_NAME_ATTR = ''\n......\n
!!! note: Some config options are shared by all LDAP servers, as follows:
```python\n# Common user sync options\nLDAP_SYNC_INTERVAL = 60\nIMPORT_NEW_USER = True # Whether to import new users during user sync\nACTIVATE_USER_WHEN_IMPORT = True # Whether to activate users when importing new users\nDEACTIVE_USER_IF_NOTFOUND = False # Set to \"true\" if you want to deactivate a user \n # when he/she is deleted from the AD server.\n\n# Common group sync options\nDEL_GROUP_IF_NOT_FOUND = False # Set to \"true\" and the sync process will delete a group if it is not found in the LDAP server.\nDEL_DEPARTMENT_IF_NOT_FOUND = False # Set to \"true\" and the sync process will delete a department if it is not found in the LDAP server.\n```\n
"},{"location":"config/ldap_in_11.0_pro/#sso-and-ldap-users-use-the-same-uid","title":"SSO and LDAP users use the same uid","text":"If you sync users from LDAP to Seafile and want Seafile to find the existing account for a user who logs in via SSO (ADFS or OAuth) instead of creating a new one, you can set
SSO_LDAP_USE_SAME_UID = True\n
Here the UID means the unique user ID. In LDAP, it is the attribute you use for LDAP_LOGIN_ATTR
(not LDAP_UID_ATTR
), in ADFS it is uid
attribute. You need to make sure the two settings use the same attribute.
On this basis, if you only want users to log in using SSO and not through LDAP, you can set
USE_LDAP_SYNC_ONLY = True\n
"},{"location":"config/ldap_in_11.0_pro/#importing-roles-from-ldap","title":"Importing Roles from LDAP","text":"Seafile Pro Edition supports syncing roles from LDAP or Active Directory.
To enable this feature, add the following option to seahub_settings.py
, e.g.
LDAP_USER_ROLE_ATTR = 'title'\n
LDAP_USER_ROLE_ATTR
is the attribute field to configure roles in LDAP. You can write a custom function to map the role by creating a file seahub_custom_functions.py
under conf/ and editing it like:
# -*- coding: utf-8 -*-\n\n# The AD roles attribute returns a list of roles (role_list).\n# The following function use the first entry in the list.\ndef ldap_role_mapping(role):\n if 'staff' in role:\n return 'Staff'\n if 'guest' in role:\n return 'Guest'\n if 'manager' in role:\n return 'Manager'\n\n# From version 11.0.11-pro, you can define the following function\n# to calculate a role from the role_list.\ndef ldap_role_list_mapping(role_list):\n if not role_list:\n return ''\n for role in role_list:\n if 'staff' in role:\n return 'Staff'\n if 'guest' in role:\n return 'Guest'\n if 'manager' in role:\n return 'Manager'\n
You should define only one of the two functions.
You can rewrite the function (in Python) to implement your own mapping rules. If the file or function doesn't exist, the first entry in role_list will be synced.
"},{"location":"config/multi_institutions/","title":"Multiple Organization/Institution User Management","text":"Starting from version 5.1, you can add institutions into Seafile and assign users to institutions. Each institution can have one or more administrators. This feature eases user administration when multiple organizations (universities) share a single Seafile instance. Unlike multi-tenancy, the users are not isolated. A user from one institution can share files with another institution.
"},{"location":"config/multi_institutions/#turn-on-the-feature","title":"Turn on the feature","text":"In seahub_settings.py
, add MULTI_INSTITUTION = True
to enable multi-institution feature, and add
EXTRA_MIDDLEWARE += (\n 'seahub.institutions.middleware.InstitutionMiddleware',\n )\n
Please replace +=
to =
if EXTRA_MIDDLEWARE_CLASSES
or EXTRA_MIDDLEWARE
is not defined
After restarting Seafile, a system admin can add institutions by entering institution names in the admin panel. The admin can also click into an institution, which will list all users whose profile.institution
match the name.
If you are using Shibboleth, you can map a Shibboleth attribute to an institution. For example, the following configuration maps the organization attribute to institution.
SHIBBOLETH_ATTRIBUTE_MAP = {\n \"givenname\": (False, \"givenname\"),\n \"sn\": (False, \"surname\"),\n \"mail\": (False, \"contact_email\"),\n \"organization\": (False, \"institution\"),\n}\n
"},{"location":"config/multi_tenancy/","title":"Multi-Tenancy Support","text":"The multi-tenancy feature is designed for hosting providers that want to host several customers in a single Seafile instance. You can create multiple organizations. Organizations are isolated from each other. Users can't share libraries between organizations.
"},{"location":"config/multi_tenancy/#seafile-config","title":"Seafile Config","text":""},{"location":"config/multi_tenancy/#seafileconf","title":"seafile.conf","text":"[general]\nmulti_tenancy = true\n
"},{"location":"config/multi_tenancy/#seahub_settingspy","title":"seahub_settings.py","text":"CLOUD_MODE = True\nMULTI_TENANCY = True\n\nORG_MEMBER_QUOTA_ENABLED = True\n\nORG_ENABLE_ADMIN_CUSTOM_NAME = True # Default is True, meaning organization name can be customized\nORG_ENABLE_ADMIN_CUSTOM_LOGO = False # Default is False, if set to True, organization logo can be customized\n\nENABLE_MULTI_ADFS = True # Default is False, if set to True, support per organization custom ADFS/SAML2 login\nLOGIN_REDIRECT_URL = '/saml2/complete/'\nSAML_ATTRIBUTE_MAPPING = {\n 'name': ('display_name', ),\n 'mail': ('contact_email', ),\n ...\n}\n
"},{"location":"config/multi_tenancy/#usage","title":"Usage","text":"An organization can be created via system admin in \u201cadmin panel->organization->Add organization\u201d.
Every organization has a URL prefix. This field is for future use. When a user creates an organization, a URL like org1 will be automatically assigned.
After creating an organization, the first user will become the admin of that organization. The organization admin can add other users. Note that the system admin can't add users.
"},{"location":"config/multi_tenancy/#adfssaml-single-sign-on-integration-in-multi-tenancy","title":"ADFS/SAML single sign-on integration in multi-tenancy","text":""},{"location":"config/multi_tenancy/#preparation-for-adfssaml","title":"Preparation for ADFS/SAML","text":"1) Prepare SP(Seafile) certificate directory and SP certificates:
Create sp certs dir
$ mkdir -p /opt/seafile-data/seafile/seahub-data/certs\n
The SP certificate can be generated by the openssl command, or you can apply to the certificate manufacturer, it is up to you. For example, generate the SP certs using the following command:
$ cd /opt/seafile-data/seafile/seahub-data/certs\n$ openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout sp.key -out sp.crt\n
The days
option indicates the validity period of the generated certificate, in days. The system admin needs to renew the certificate regularly.
Note
If certificates are not placed in /opt/seafile-data/seafile/seahub-data/certs
, you need to add the following configuration in seahub_settings.py:
SAML_CERTS_DIR = '/path/to/certs'\n
2) Add the following configuration to seahub_settings.py and then restart Seafile:
ENABLE_MULTI_ADFS = True\nLOGIN_REDIRECT_URL = '/saml2/complete/'\nSAML_ATTRIBUTE_MAPPING = {\n 'name': ('display_name', ),\n 'mail': ('contact_email', ),\n ...\n}\n
"},{"location":"config/multi_tenancy/#integration-with-adfssaml-single-sign-on","title":"Integration with ADFS/SAML single sign-on","text":"Please refer to this document.
"},{"location":"config/oauth/","title":"OAuth Authentication","text":""},{"location":"config/oauth/#oauth","title":"OAuth","text":"Before using OAuth, you should first register an OAuth2 client application on your authorization server, then add some configurations to seahub_settings.py.
"},{"location":"config/oauth/#register-an-oauth2-client-application","title":"Register an OAuth2 client application","text":"Here we use Github as an example. First you should register an OAuth2 client application on Github, official document from Github is very detailed.
"},{"location":"config/oauth/#configuration","title":"Configuration","text":"Add the folllowing configurations to seahub_settings.py:
ENABLE_OAUTH = True\n\n# If create new user when he/she logs in Seafile for the first time, defalut `True`.\nOAUTH_CREATE_UNKNOWN_USER = True\n\n# If active new user when he/she logs in Seafile for the first time, defalut `True`.\nOAUTH_ACTIVATE_USER_AFTER_CREATION = True\n\n# Usually OAuth works through SSL layer. If your server is not parametrized to allow HTTPS, some method will raise an \"oauthlib.oauth2.rfc6749.errors.InsecureTransportError\". Set this to `True` to avoid this error.\nOAUTH_ENABLE_INSECURE_TRANSPORT = True\n\n# Client id/secret generated by authorization server when you register your client application.\nOAUTH_CLIENT_ID = \"your-client-id\"\nOAUTH_CLIENT_SECRET = \"your-client-secret\"\n\n# Callback url when user authentication succeeded. Note, the redirect url you input when you register your client application MUST be exactly the same as this value.\nOAUTH_REDIRECT_URL = 'http{s}://example.com/oauth/callback/'\n\n# The following should NOT be changed if you are using Github as OAuth provider.\nOAUTH_PROVIDER_DOMAIN = 'github.com' \nOAUTH_PROVIDER = 'github.com'\n\nOAUTH_AUTHORIZATION_URL = 'https://github.com/login/oauth/authorize'\nOAUTH_TOKEN_URL = 'https://github.com/login/oauth/access_token'\nOAUTH_USER_INFO_URL = 'https://api.github.com/user'\nOAUTH_SCOPE = [\"user\",]\nOAUTH_ATTRIBUTE_MAP = {\n \"id\": (True, \"email\"), # Please keep the 'email' option unchanged to be compatible with the login of users of version 11.0 and earlier.\n \"name\": (False, \"name\"),\n \"email\": (False, \"contact_email\"),\n \"uid\": (True, \"uid\"), # Seafile v11.0 + \n}\n
"},{"location":"config/oauth/#more-explanations-about-the-settings","title":"More explanations about the settings","text":"OAUTH_PROVIDER / OAUTH_PROVIDER_DOMAIN
OAUTH_PROVIDER_DOMAIN
will be deprecated, and it can be replaced by OAUTH_PROVIDER
. This variable is used in the database to identify third-party providers, either as a domain or as an easy-to-remember string less than 32 characters.
OAUTH_ATTRIBUTE_MAP
This variables describes which claims from the response of the user info endpoint are to be filled into which attributes of the new Seafile user. The format is showing like below:
OAUTH_ATTRIBUTE_MAP = {\n <:Attribute in the OAuth provider>: (<:Is required or not in Seafile?>, <:Attribute in Seafile >)\n }\n
If the remote resource server, like Github, uses email to identify an unique user too, Seafile will use Github id directorily, the OAUTH_ATTRIBUTE_MAP setting for Github should be like this:
OAUTH_ATTRIBUTE_MAP = {\n \"id\": (True, \"email\"), # it is deprecated\n \"uid / id / username\": (True, \"uid\") \n\n # extra infos you want to update to Seafile\n \"name\": (False, \"name\"),\n \"email\": (False, \"contact_email\"), \n }\n
The key part id
stands for an unique identifier of user in Github, this tells Seafile which attribute remote resoure server uses to indentify its user. The value part True
stands for if this field is mandatory by Seafile.
Since 11.0 version, Seafile use uid
as the external unique identifier of the user. It stores uid
in table social_auth_usersocialauth
and map it to internal unique identifier used in Seafile. Different OAuth systems have different attributes, which may be: id
or uid
or username
, etc. And the id/email config id: (True, email)
is deprecated.
If you upgrade from a version below 11.0, you need to have both fields configured, i.e., your configuration should be like:
OAUTH_ATTRIBUTE_MAP = {\n \"id\": (True, \"email\"),\n \"uid\": (True, \"uid\") ,\n \"name\": (False, \"name\"),\n \"email\": (False, \"contact_email\"), \n }\n
In this way, when a user logs in, Seafile will first use the \"id -> email\" map to find the old user and then create a \"uid -> uid\" map for this user. After all users have logged in once, you can delete the configuration \"id\": (True, \"email\")
.
If you use a newly deployed 11.0+ Seafile instance, you don't need the \"id\": (True, \"email\")
item. Your configuration should be like:
OAUTH_ATTRIBUTE_MAP = {\n \"uid\": (True, \"uid\") ,\n \"name\": (False, \"name\"),\n \"email\": (False, \"contact_email\"), \n }\n
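The mapping described above can be sketched like this (illustrative only; the helper function is not Seafile's actual implementation):

```python
# Illustrative sketch of applying OAUTH_ATTRIBUTE_MAP to the user info
# returned by the provider. Not Seafile's actual code.
def map_oauth_attributes(user_info, attribute_map):
    mapped = {}
    for remote_attr, (required, local_attr) in attribute_map.items():
        if remote_attr in user_info:
            mapped[local_attr] = user_info[remote_attr]
        elif required:
            # a required claim missing from the provider response is an error
            raise ValueError('missing required attribute: %s' % remote_attr)
    return mapped

OAUTH_ATTRIBUTE_MAP = {
    "uid": (True, "uid"),
    "name": (False, "name"),
    "email": (False, "contact_email"),
}
info = {"uid": "12345", "name": "Jane", "email": "jane@example.com"}
profile = map_oauth_attributes(info, OAUTH_ATTRIBUTE_MAP)
# profile == {'uid': '12345', 'name': 'Jane', 'contact_email': 'jane@example.com'}
```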
"},{"location":"config/oauth/#sample-settings","title":"Sample settings","text":"GoogleGithubGitLabAzure Cloud ENABLE_OAUTH = True\nOAUTH_ENABLE_INSECURE_TRANSPORT = True\n\nOAUTH_CLIENT_ID = \"your-client-id\"\nOAUTH_CLIENT_SECRET = \"your-client-secret\"\nOAUTH_REDIRECT_URL = 'http{s}://example.com/oauth/callback/'\n\n# The following shoud NOT be changed if you are using Google as OAuth provider.\nOAUTH_PROVIDER_DOMAIN = 'google.com'\nOAUTH_AUTHORIZATION_URL = 'https://accounts.google.com/o/oauth2/v2/auth'\nOAUTH_TOKEN_URL = 'https://www.googleapis.com/oauth2/v4/token'\nOAUTH_USER_INFO_URL = 'https://www.googleapis.com/oauth2/v1/userinfo'\nOAUTH_SCOPE = [\n \"openid\",\n \"https://www.googleapis.com/auth/userinfo.email\",\n \"https://www.googleapis.com/auth/userinfo.profile\",\n]\nOAUTH_ATTRIBUTE_MAP = {\n \"sub\": (True, \"uid\"),\n \"name\": (False, \"name\"),\n \"email\": (False, \"contact_email\"),\n}\n
Note
For Github, email
is not the unique identifier for a user, but id
is in most cases, so we use id
as the settings example in our manual. As Seafile uses email to identify a unique user account for now, we combine id
and OAUTH_PROVIDER_DOMAIN
, which is github.com in this case, into an email-format string and then create this account if it does not exist.
ENABLE_OAUTH = True\nOAUTH_ENABLE_INSECURE_TRANSPORT = True\n\nOAUTH_CLIENT_ID = \"your-client-id\"\nOAUTH_CLIENT_SECRET = \"your-client-secret\"\nOAUTH_REDIRECT_URL = 'http{s}://example.com/oauth/callback/'\n\nOAUTH_PROVIDER_DOMAIN = 'github.com'\nOAUTH_AUTHORIZATION_URL = 'https://github.com/login/oauth/authorize'\nOAUTH_TOKEN_URL = 'https://github.com/login/oauth/access_token'\nOAUTH_USER_INFO_URL = 'https://api.github.com/user'\nOAUTH_SCOPE = [\"user\",]\nOAUTH_ATTRIBUTE_MAP = {\n \"id\": (True, 'uid'),\n \"email\": (False, \"contact_email\"),\n \"name\": (False, \"name\"),\n}\n
Note
To enable OAuth via GitLab. Create an application in GitLab (under Admin area->Applications).
Fill in required fields:
Name: a name you specify
Redirect URI: The callback url see below OAUTH_REDIRECT_URL
Trusted: Skip the confirmation dialog page. Select this to not ask users whether they want to authorize Seafile to access their account data.
Scopes: Select openid
and read_user
in the scopes list.
Press submit and copy the client id and secret you receive on the confirmation page and use them in this template for your seahub_settings.py
ENABLE_OAUTH = True\nOAUTH_CLIENT_ID = \"your-client-id\"\nOAUTH_CLIENT_SECRET = \"your-client-secret\"\nOAUTH_REDIRECT_URL = \"https://your-seafile/oauth/callback/\"\n\nOAUTH_PROVIDER_DOMAIN = 'your-domain'\nOAUTH_AUTHORIZATION_URL = 'https://gitlab.your-domain/oauth/authorize'\nOAUTH_TOKEN_URL = 'https://gitlab.your-domain/oauth/token'\nOAUTH_USER_INFO_URL = 'https://gitlab.your-domain/api/v4/user'\nOAUTH_SCOPE = [\"openid\", \"read_user\"]\nOAUTH_ATTRIBUTE_MAP = {\n \"email\": (True, \"uid\"),\n \"name\": (False, \"name\")\n}\n
Note
For users of Azure Cloud, as there is no id
field returned from Azure Cloud's user info endpoint, so we use a special configuration for OAUTH_ATTRIBUTE_MAP
setting (others are the same as Github/Google). Please see this tutorial for the complete deployment process of OAuth against Azure Cloud.
OAUTH_ATTRIBUTE_MAP = {\n \"email\": (True, \"uid\"),\n \"name\": (False, \"name\")\n}\n
"},{"location":"config/ocm/","title":"Open Cloud Mesh","text":"From 8.0.0, Seafile supports the OCM protocol. With OCM, users can share libraries with other servers that also have OCM enabled.
Seafile currently supports sharing between Seafile servers with versions greater than 8.0, and sharing from Nextcloud to Seafile since 9.0.
These two functions cannot be enabled at the same time.
"},{"location":"config/ocm/#configuration","title":"Configuration","text":"Add the following configuration to seahub_settings.py
.
# Enable OCM\nENABLE_OCM = True\nOCM_PROVIDER_ID = '71687320-6219-47af-82f3-32012707a5ae' # the unique id of this server\nOCM_REMOTE_SERVERS = [\n {\n \"server_name\": \"dev\",\n \"server_url\": \"https://seafile-domain-1/\", # should end with '/'\n },\n {\n \"server_name\": \"download\",\n \"server_url\": \"https://seafile-domain-2/\", # should end with '/'\n },\n]\n
# Enable OCM\nENABLE_OCM_VIA_WEBDAV = True\nOCM_PROVIDER_ID = '71687320-6219-47af-82f3-32012707a5ae' # the unique id of this server\nOCM_REMOTE_SERVERS = [\n {\n \"server_name\": \"nextcloud\",\n \"server_url\": \"https://nextcloud-domain-1/\", # should end with '/'\n }\n]\n
OCM_REMOTE_SERVERS is a list of servers that you allow your users to share libraries with.
"},{"location":"config/ocm/#usage","title":"Usage","text":""},{"location":"config/ocm/#share-library-to-other-server","title":"Share library to other server","text":"In the library sharing dialog jump to \"Share to other server\", you can share this library to users of another server with \"Read-Only\" or \"Read-Write\" permission. You can also view shared records and cancel sharing.
"},{"location":"config/ocm/#view-be-shared-libraries","title":"View be shared libraries","text":"You can jump to \"Shared from other servers\" page to view the libraries shared by other servers and cancel the sharing.
And enter the library to view, download or upload files.
"},{"location":"config/remote_user/","title":"SSO using Remote User","text":"Starting from 7.0.0, Seafile can integrate with various Single Sign On systems via a proxy server. Examples include Apache as Shibboleth proxy, or LemonLdap as a proxy to LDAP servers, or Apache as Kerberos proxy. Seafile can retrieve user information from special request headers (HTTP_REMOTE_USER, HTTP_X_AUTH_USER, etc.) set by the proxy servers.
After the proxy server (Apache/Nginx) is successfully authenticated, the user information is set to the request header, and Seafile creates and logs in the user based on this information.
Make sure that the proxy server has a corresponding security mechanism to protect against forgery request header attacks
Please add the following settings to conf/seahub_settings.py
to enable this feature.
ENABLE_REMOTE_USER_AUTHENTICATION = True\n\n# Optional, HTTP header, which is configured in your web server conf file,\n# used for Seafile to get the user's unique id, default value is 'HTTP_REMOTE_USER'.\nREMOTE_USER_HEADER = 'HTTP_REMOTE_USER'\n\n# Optional, when the value of HTTP_REMOTE_USER is not a valid email address,\n# Seafile will build an email-like unique id from the value of 'REMOTE_USER_HEADER'\n# and this domain, e.g. user1@example.com.\nREMOTE_USER_DOMAIN = 'example.com'\n\n# Optional, whether to create new users in the Seafile system, default value is True.\n# If this setting is disabled, users that don't preexist in the Seafile DB cannot log in.\n# The admin has to first import the users from external systems like LDAP.\nREMOTE_USER_CREATE_UNKNOWN_USER = True\n\n# Optional, whether to activate new users in the Seafile system, default value is True.\n# If this setting is disabled, new users will be unable to log in until\n# the administrator manually activates them.\nREMOTE_USER_ACTIVATE_USER_AFTER_CREATION = True\n\n# Optional, map user attributes in the HTTP header to Seafile's user attributes.\nREMOTE_USER_ATTRIBUTE_MAP = {\n 'HTTP_DISPLAYNAME': 'name',\n 'HTTP_MAIL': 'contact_email',\n\n # for user info\n \"HTTP_GIVENNAME\": 'givenname',\n \"HTTP_SN\": 'surname',\n \"HTTP_ORGANIZATION\": 'institution',\n\n # for user role\n 'HTTP_Shibboleth-affiliation': 'affiliation',\n}\n\n# Map affiliation to user role. Though the config name is SHIBBOLETH_AFFILIATION_ROLE_MAP,\n# it is not restricted to Shibboleth.\nSHIBBOLETH_AFFILIATION_ROLE_MAP = {\n 'employee@uni-mainz.de': 'staff',\n 'member@uni-mainz.de': 'staff',\n 'student@uni-mainz.de': 'student',\n 'employee@hu-berlin.de': 'guest',\n 'patterns': (\n ('*@hu-berlin.de', 'guest1'),\n ('*@*.de', 'guest2'),\n ('*', 'guest'),\n ),\n}\n
Then restart Seafile.
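The way REMOTE_USER_HEADER and REMOTE_USER_DOMAIN combine into a login id can be sketched as follows; `build_user_id` is a hypothetical helper written for illustration, not Seafile's actual code:

```python
# Hypothetical sketch: derive an email-like unique id from the remote-user
# header value and REMOTE_USER_DOMAIN. Not Seafile's implementation.
def build_user_id(remote_user: str, domain: str) -> str:
    if "@" in remote_user:
        # The header value already looks like an email address; use it as-is.
        return remote_user
    # Otherwise append the configured domain, e.g. 'user1' -> 'user1@example.com'.
    return f"{remote_user}@{domain}"

print(build_user_id("user1", "example.com"))  # user1@example.com
```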
"},{"location":"config/roles_permissions/","title":"Roles and Permissions Support","text":"You can add/edit roles and permission for users. A role is just a group of users with some pre-defined permissions, you can toggle user roles in user list page at admin panel. For most permissions, the meaning can be easily obtained from the variable name. The following is a further detailed introduction to some variables.
role_quota
is used to set the quota for users with a certain role. For example, we can set the quota of the employee role to 100G by adding 'role_quota': '100g'
, and leave users with other roles at the default quota.
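A quota string such as '100g' can be read as a number followed by a unit suffix. The following parser is purely illustrative, not Seafile's implementation, and the supported unit letters are an assumption for this sketch:

```python
# Illustrative parser for role_quota strings such as '100g' or '500m'.
# The m/g/t unit letters are an assumption for this sketch, not Seafile's spec.
UNITS = {"m": 1024 ** 2, "g": 1024 ** 3, "t": 1024 ** 4}

def parse_role_quota(value: str):
    value = value.strip().lower()
    if not value:
        return None  # empty string: fall back to the default quota
    number, unit = value[:-1], value[-1]
    return int(number) * UNITS[unit]

print(parse_role_quota("100g"))  # 107374182400 (100 GiB in bytes)
```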
can_add_public_repo
sets whether a role can create a public library; the default is False
.
Since version 11.0.9 Pro, can_share_repo
has been added to limit users' ability to share a library.
The can_add_public_repo
option will not take effect if you configure global CLOUD_MODE = True
storage_ids
permission is used for assigning storage backends to users with a specific role. More details can be found in multiple storage backends.
upload_rate_limit
and download_rate_limit
are added to limit upload and download speed for users with different roles.
Note
After configuring the rate limit, run the following command in the seafile-server-latest
directory to make the configuration take effect:
./seahub.sh python-env python3 seahub/manage.py set_user_role_upload_download_rate_limit\n
can_drag_drop_folder_to_sync
: allow or deny users to sync folders by dragging and dropping
can_export_files_via_mobile_client
: allow or deny users to export files using the mobile client
Seafile comes with two built-in roles, default
and guest
. A default user is a normal user with the following permissions:
'default': {\n 'can_add_repo': True,\n 'can_share_repo': True,\n 'can_add_group': True,\n 'can_view_org': True,\n 'can_add_public_repo': False,\n 'can_use_global_address_book': True,\n 'can_generate_share_link': True,\n 'can_generate_upload_link': True,\n 'can_send_share_link_mail': True,\n 'can_invite_guest': False,\n 'can_drag_drop_folder_to_sync': True,\n 'can_connect_with_android_clients': True,\n 'can_connect_with_ios_clients': True,\n 'can_connect_with_desktop_clients': True,\n 'can_export_files_via_mobile_client': True,\n 'storage_ids': [],\n 'role_quota': '',\n 'can_publish_repo': True,\n 'upload_rate_limit': 0, # unit: kb/s\n 'download_rate_limit': 0,\n },\n
While a guest user can only read files/folders in the system, here are the permissions for a guest user:
'guest': {\n 'can_add_repo': False,\n 'can_share_repo': False,\n 'can_add_group': False,\n 'can_view_org': False,\n 'can_add_public_repo': False,\n 'can_use_global_address_book': False,\n 'can_generate_share_link': False,\n 'can_generate_upload_link': False,\n 'can_send_share_link_mail': False,\n 'can_invite_guest': False,\n 'can_drag_drop_folder_to_sync': False,\n 'can_connect_with_android_clients': False,\n 'can_connect_with_ios_clients': False,\n 'can_connect_with_desktop_clients': False,\n 'can_export_files_via_mobile_client': False,\n 'storage_ids': [],\n 'role_quota': '',\n 'can_publish_repo': False,\n 'upload_rate_limit': 0,\n 'download_rate_limit': 0,\n },\n
"},{"location":"config/roles_permissions/#edit-build-in-roles","title":"Edit build-in roles","text":"If you want to edit the permissions of build-in roles, e.g. default users can invite guest, guest users can view repos in organization, you can add following lines to seahub_settings.py
with corresponding permissions set to True
.
ENABLED_ROLE_PERMISSIONS = {\n 'default': {\n 'can_add_repo': True,\n 'can_share_repo': True,\n 'can_add_group': True,\n 'can_view_org': True,\n 'can_add_public_repo': False,\n 'can_use_global_address_book': True,\n 'can_generate_share_link': True,\n 'can_generate_upload_link': True,\n 'can_send_share_link_mail': True,\n 'can_invite_guest': False,\n 'can_drag_drop_folder_to_sync': True,\n 'can_connect_with_android_clients': True,\n 'can_connect_with_ios_clients': True,\n 'can_connect_with_desktop_clients': True,\n 'can_export_files_via_mobile_client': True,\n 'storage_ids': [],\n 'role_quota': '',\n 'can_publish_repo': True,\n 'upload_rate_limit': 2000, # unit: kb/s\n 'download_rate_limit': 4000,\n },\n 'guest': {\n 'can_add_repo': False,\n 'can_share_repo': False,\n 'can_add_group': False,\n 'can_view_org': False,\n 'can_add_public_repo': False,\n 'can_use_global_address_book': False,\n 'can_generate_share_link': False,\n 'can_generate_upload_link': False,\n 'can_send_share_link_mail': False,\n 'can_invite_guest': False,\n 'can_drag_drop_folder_to_sync': False,\n 'can_connect_with_android_clients': False,\n 'can_connect_with_ios_clients': False,\n 'can_connect_with_desktop_clients': False,\n 'can_export_files_via_mobile_client': False,\n 'storage_ids': [],\n 'role_quota': '',\n 'can_publish_repo': False,\n 'upload_rate_limit': 100,\n 'download_rate_limit': 200,\n }\n}\n
"},{"location":"config/roles_permissions/#more-about-guest-invitation-feature","title":"More about guest invitation feature","text":"An user who has can_invite_guest
permission can invite people outside of the organization as guest.
In order to use this feature, in addition to granting can_invite_guest
permission to the user, add the following line to seahub_settings.py
,
ENABLE_GUEST_INVITATION = True\n\n# invitation expire time\nINVITATIONS_TOKEN_AGE = 72 # hours\n
After restarting, users who have can_invite_guest
permission will see the \"Invite People\" section in the sidebar of the home page.
Users can invite a guest user by providing his/her email address; the system will email the invitation link to the user.
Tip
If you want to block certain email addresses for the invitation, you can define a blacklist, e.g.
INVITATION_ACCEPTER_BLACKLIST = [\"a@a.com\", \"*@a-a-a.com\", r\".*@(foo|bar).com\", ]\n
After that, the email address \"a@a.com\", any email address ending with \"@a-a-a.com\", and any email address ending with \"@foo.com\" or \"@bar.com\" will not be allowed.
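To see how such a blacklist behaves, here is a rough illustration (not Seahub's actual matcher): glob-style entries are translated with fnmatch, while entries that already look like regular expressions are matched as-is.

```python
import fnmatch
import re

# Rough illustration of blacklist matching -- not Seahub's actual matcher.
BLACKLIST = ["a@a.com", "*@a-a-a.com", r".*@(foo|bar).com"]

def is_blocked(email: str) -> bool:
    for pattern in BLACKLIST:
        if "*" in pattern and not pattern.startswith(".*"):
            # Glob-style entry: translate '*' wildcards into a regex.
            regex = fnmatch.translate(pattern)
        else:
            # Plain address or regex entry: use it directly.
            regex = pattern
        if re.fullmatch(regex, email):
            return True
    return False

print(is_blocked("x@a-a-a.com"))    # True
print(is_blocked("me@foo.com"))     # True
print(is_blocked("ok@example.com")) # False
```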
"},{"location":"config/roles_permissions/#add-custom-roles","title":"Add custom roles","text":"If you want to add a new role and assign some users with this role, e.g. new role employee
can invite guests, create public libraries, and have all other permissions a default user has, you can add the following lines to seahub_settings.py
ENABLED_ROLE_PERMISSIONS = {\n 'default': {\n 'can_add_repo': True,\n 'can_share_repo': True,\n 'can_add_group': True,\n 'can_view_org': True,\n 'can_add_public_repo': False,\n 'can_use_global_address_book': True,\n 'can_generate_share_link': True,\n 'can_generate_upload_link': True,\n 'can_send_share_link_mail': True,\n 'can_invite_guest': False,\n 'can_drag_drop_folder_to_sync': True,\n 'can_connect_with_android_clients': True,\n 'can_connect_with_ios_clients': True,\n 'can_connect_with_desktop_clients': True,\n 'can_export_files_via_mobile_client': True,\n 'storage_ids': [],\n 'role_quota': '',\n 'can_publish_repo': True,\n 'upload_rate_limit': 2000, # unit: kb/s\n 'download_rate_limit': 4000,\n },\n 'guest': {\n 'can_add_repo': False,\n 'can_share_repo': False,\n 'can_add_group': False,\n 'can_view_org': False,\n 'can_add_public_repo': False,\n 'can_use_global_address_book': False,\n 'can_generate_share_link': False,\n 'can_generate_upload_link': False,\n 'can_send_share_link_mail': False,\n 'can_invite_guest': False,\n 'can_drag_drop_folder_to_sync': False,\n 'can_connect_with_android_clients': False,\n 'can_connect_with_ios_clients': False,\n 'can_connect_with_desktop_clients': False,\n 'can_export_files_via_mobile_client': False,\n 'storage_ids': [],\n 'role_quota': '',\n 'can_publish_repo': False,\n 'upload_rate_limit': 100,\n 'download_rate_limit': 200,\n },\n 'employee': {\n 'can_add_repo': True,\n 'can_share_repo': True,\n 'can_add_group': True,\n 'can_view_org': True,\n 'can_add_public_repo': True,\n 'can_use_global_address_book': True,\n 'can_generate_share_link': True,\n 'can_generate_upload_link': True,\n 'can_send_share_link_mail': True,\n 'can_invite_guest': True,\n 'can_drag_drop_folder_to_sync': True,\n 'can_connect_with_android_clients': True,\n 'can_connect_with_ios_clients': True,\n 'can_connect_with_desktop_clients': True,\n 'can_export_files_via_mobile_client': True,\n 'storage_ids': [],\n 'role_quota': '',\n 'can_publish_repo': True,\n 
'upload_rate_limit': 500,\n 'download_rate_limit': 800,\n },\n}\n
"},{"location":"config/saml2_in_10.0/","title":"SAML 2.0 in version 10.0+","text":"In this document, we use Microsoft Azure SAML single sign-on app and Microsoft on-premise ADFS to show how Seafile integrate SAML 2.0. Other SAML 2.0 provider should be similar.
"},{"location":"config/saml2_in_10.0/#preparations-for-saml-20","title":"Preparations for SAML 2.0","text":"First, install xmlsec1 package:
$ apt update\n$ apt install xmlsec1\n$ apt install dnsutils # For multi-tenancy feature\n
Second, prepare SP(Seafile) certificate directory and SP certificates:
Create certs dir
$ mkdir -p /opt/seafile/seahub-data/certs\n
The SP certificate can be generated with the openssl command, or you can request one from a certificate authority. For example, generate the SP certs with the following command:
$ cd /opt/seafile/seahub-data/certs\n$ openssl req -x509 -nodes -days 3650 -newkey rsa:2048 -keyout sp.key -out sp.crt\n
The days
option indicates the validity period of the generated certificate, in days. The system admin needs to renew the certificate regularly.
If you use Microsoft Azure SAML app to achieve single sign-on, please follow the steps below:
First, add SAML single sign-on app and assign users, refer to: add an Azure AD SAML application, create and assign users.
Second, setup the Identifier, Reply URL, and Sign on URL of the SAML app based on your service URL, refer to: enable single sign-on for saml app. The format of the Identifier, Reply URL, and Sign on URL are: https://example.com/saml2/metadata/, https://example.com/saml2/acs/, https://example.com/, e.g.:
Next, edit the SAML attributes & claims. Keep the default attributes & claims of the SAML app unchanged; the uid attribute must be added, while the mail and name attributes are optional, e.g.:
Next, download the SAML app's certificate in base64 format and rename it to idp.crt:
and put it under the certs directory (/opt/seafile/seahub-data/certs
).
Next, copy the metadata URL of the SAML app:
and paste it into the SAML_REMOTE_METADATA_URL
option in seahub_settings.py, e.g.:
SAML_REMOTE_METADATA_URL = 'https://login.microsoftonline.com/xxx/federationmetadata/2007-06/federationmetadata.xml?appid=xxx' # copy from SAML app\n
Next, add ENABLE_ADFS_LOGIN
, LOGIN_REDIRECT_URL
and SAML_ATTRIBUTE_MAPPING
options to seahub_settings.py, and then restart Seafile, e.g:
ENABLE_ADFS_LOGIN = True\nLOGIN_REDIRECT_URL = '/saml2/complete/'\nSAML_ATTRIBUTE_MAPPING = {\n 'name': ('display_name', ),\n 'mail': ('contact_email', ),\n 'seafile_groups': ('', ), # Optional, set this attribute if you need to synchronize groups/departments.\n ...\n\n}\nSAML_REMOTE_METADATA_URL = 'https://login.microsoftonline.com/xxx/federationmetadata/2007-06/federationmetadata.xml?appid=xxx' # copy from SAML app\n
Note
If the xmlsec1 binary is not located at /usr/bin/xmlsec1
, you need to add the following configuration in seahub_settings.py:SAML_XMLSEC_BINARY_PATH = '/path/to/xmlsec1'\n
View where the xmlsec1 binary is located:
$ which xmlsec1\n
If the certs directory is not /opt/seafile/seahub-data/certs
, you need to add the following configuration in seahub_settings.py:SAML_CERTS_DIR = '/path/to/certs'\n
Finally, open the browser and enter the Seafile login page, click Single Sign-On
, and use the user assigned to SAML app to perform a SAML login test.
If you use Microsoft ADFS to achieve single sign-on, please follow the steps below:
First, please make sure the following preparations are done:
A Windows Server with ADFS installed. For configuring and installing ADFS you can see this article.
A valid SSL certificate for ADFS server, and here we use temp.adfs.com
as the domain name example.
A valid SSL certificate for Seafile server, and here we use demo.seafile.com
as the domain name example.
Second, download the base64 format certificate and upload it:
Navigate to the AD FS management window. In the left sidebar menu, navigate to Services > Certificates.
Locate the Token-signing certificate. Right-click the certificate and select View Certificate.
In the dialog box, select the Details tab.
Click Copy to File.
In the Certificate Export Wizard that opens, click Next.
Select Base-64 encoded X.509 (.CER), then click Next.
Name it idp.crt, then click Next.
Click Finish to complete the download.
And then put it under the certs directory (/opt/seafile/seahub-data/certs
).
Next, add the following configurations to seahub_settings.py and then restart Seafile:
ENABLE_ADFS_LOGIN = True\nLOGIN_REDIRECT_URL = '/saml2/complete/'\nSAML_ATTRIBUTE_MAPPING = {\n 'name': ('display_name', ),\n 'mail': ('contact_email', ),\n 'seafile_groups': ('', ), # Optional, set this attribute if you need to synchronize groups/departments.\n ...\n}\nSAML_REMOTE_METADATA_URL = 'https://temp.adfs.com/federationmetadata/2007-06/federationmetadata.xml' # The format of the ADFS federation metadata URL is: `https://{your ADFS domain name}/federationmetadata/2007-06/federationmetadata.xml`\n
Next, add relying party trust:
Log into the ADFS server and open the ADFS management.
Under Actions, click Add Relying Party Trust.
On the Welcome page, choose Claims aware and click Start.
Select Import data about the relying party published online or on a local network, type your metadata URL in Federation metadata address (host name or URL), and then click Next. The format of your metadata URL is: https://example.com/saml2/metadata/
, e.g.:
On the Specify Display Name page type a name in Display name, e.g. Seafile
, under Notes type a description for this relying party trust, and then click Next.
In the Choose an access control policy window, select Permit everyone, then click Next.
Review your settings, then click Next.
Click Close.
Next, create claims rules:
Open the ADFS management, click Relying Party Trusts.
Right-click your trust, and then click Edit Claim Issuance Policy.
On the Issuance Transform Rules tab click Add Rules.
Click the Claim rule template dropdown menu and select Send LDAP Attributes as Claims, and then click Next.
In the Claim rule name field, type the display name for this rule, such as Seafile Claim rule. Click the Attribute store dropdown menu and select Active Directory. In the LDAP Attribute column, click the dropdown menu and select User-Principal-Name. In the Outgoing Claim Type column, click the dropdown menu and select UPN. And then click Finish.
Click Add Rule again.
Click the Claim rule template dropdown menu and select Transform an Incoming Claim, and then click Next.
In the Claim rule name field, type the display name for this rule, such as UPN to Name ID. Click the Incoming claim type dropdown menu and select UPN (it must match the Outgoing Claim Type in the rule Seafile Claim rule
). Click the Outgoing claim type dropdown menu and select Name ID. Click the Outgoing name ID format dropdown menu and select Email. And then click Finish.
Click OK to add both new rules.
When creating claims rules, you can also select other LDAP Attributes, such as E-Mail-Addresses, depending on your ADFS service.
Finally, open the browser and enter the Seafile login page, click Single Sign-On
to perform ADFS login test.
In the file seafevents.conf
:
[DATABASE]\ntype = mysql\nhost = 192.168.0.2\nport = 3306\nusername = seafile\npassword = password\nname = seahub_db\n\n[STATISTICS]\n## must be \"true\" to enable statistics\nenabled = false\n\n[SEAHUB EMAIL]\n## must be \"true\" to enable user email notifications when there are new unread notifications\nenabled = true\n\n## interval of sending Seahub email. Can be s(seconds), m(minutes), h(hours), d(days)\ninterval = 30m\n\n[FILE HISTORY]\nenabled = true\nthreshold = 5\nsuffix = md,txt,...\n\n## From Seafile 7.0.0\n## Recording file history to database for fast access is enabled by default for 'Markdown, .txt, ppt, pptx, doc, docx, xls, xlsx'.\n## After enabling the feature, old history versions for markdown, doc, docx files will not be listed in the history page.\n## (Only new histories stored in the database will be listed.) But users can still access the old versions in the library snapshots.\n## For file types not listed in the suffix, history versions will be scanned from the library history as before.\n## The feature is enabled by default. You can set 'enabled = false' to disable it.\n\n## The 'threshold' is the time threshold for recording a historical version of a file, in minutes; the default is 5 minutes.\n## This means that if the interval between two adjacent file saves is less than 5 minutes, the two file changes will be merged and recorded as one historical version.\n## When set to 0, there is no time limit, which means that each save generates a separate historical version.\n\n## If you need to modify the file type list, you can add the 'suffix = md, txt, ...' configuration item.\n
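The threshold behavior described in the [FILE HISTORY] comments can be sketched as follows; this is an illustration of the merging rule only, not seafevents' implementation:

```python
# Illustration of the FILE HISTORY 'threshold' rule: saves closer together
# than `threshold` minutes are merged into one history version.
def history_versions(save_times_minutes, threshold=5):
    versions = []
    for t in save_times_minutes:
        if versions and threshold > 0 and t - versions[-1] < threshold:
            versions[-1] = t          # merge into the previous version
        else:
            versions.append(t)        # record a new history version
    return versions

# Saves at minute 0, 3 and 10: the saves at 0 and 3 merge into one version.
print(history_versions([0, 3, 10]))               # [3, 10]
# threshold = 0 keeps every save as its own version.
print(history_versions([0, 3, 10], threshold=0))  # [0, 3, 10]
```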
"},{"location":"config/seafevents-conf/#the-following-configurations-for-pro-edition-only","title":"The following configurations for Pro Edition only","text":"[AUDIT]\n## Audit log is disabled default.\n## Leads to additional SQL tables being filled up, make sure your SQL server is able to handle it.\nenabled = true\n\n[INDEX FILES]\n## must be \"true\" to enable search\nenabled = true\n\n## The interval the search index is updated. Can be s(seconds), m(minutes), h(hours), d(days)\ninterval=10m\n\n## From Seafile 6.3.0 pro, in order to speed up the full-text search speed, you should setup\nhighlight = fvh\n\n## If true, indexes the contents of office/pdf files while updating search index\n## Note: If you change this option from \"false\" to \"true\", then you need to clear the search index and update the index again.\n## Refer to file search manual for details.\nindex_office_pdf=false\n\n## The default size limit for doc, docx, ppt, pptx, xls, xlsx and pdf files. Files larger than this will not be indexed.\n## Since version 6.2.0\n## Unit: MB\noffice_file_size_limit = 10\n\n## From 9.0.7 pro, Seafile supports connecting to Elasticsearch through username and password, you need to configure username and password for the Elasticsearch server\nusername = elastic # username to connect to Elasticsearch\npassword = elastic_password # password to connect to Elasticsearch\n\n## From 9.0.7 pro, Seafile supports connecting to elasticsearch via HTTPS, you need to configure HTTPS for the Elasticsearch server\nscheme = https # The default is http. If the Elasticsearch server is not configured with HTTPS, the scheme and cafile do not need to be configured\ncafile = path/to/cert.pem # The certificate path for user authentication. 
If the Elasticsearch server does not enable certificate authentication, this does not need to be configured\n\n## From version 11.0.5 Pro, you can customize Elasticsearch index names for distinct instances when integrating multiple Seafile servers with a single Elasticsearch server.\nrepo_status_index_name = your-repo-status-index-name # default is `repo_head`\nrepo_files_index_name = your-repo-files-index-name # default is `repofiles`\n\n## The default loglevel is `warning`.\n## Since version 11.0.4\nloglevel = info\n\n[EVENTS PUBLISH]\n## must be \"true\" to enable publishing event messages\nenabled = false\n## message format: repo-update\t{{repo_id}}\t{{commit_id}}\n## Currently only the Redis message queue is supported\nmq_type = redis\n\n[REDIS]\n## Redis uses the 0 database and the \"repo_update\" channel\nserver = 192.168.1.1\nport = 6379\npassword = q!1w@#123\n\n[AUTO DELETION]\nenabled = true # Default is false; when enabled, users can use the file auto deletion feature\ninterval = 86400 # The unit is second(s); the default frequency is one day, that is, it runs once a day\n
"},{"location":"config/seafile-conf/","title":"Seafile.conf settings","text":"Important
Every entry in this configuration file is case-sensitive.
You need to restart seafile and seahub so that your changes take effect.
./seahub.sh restart\n./seafile.sh restart\n
"},{"location":"config/seafile-conf/#storage-quota-setting","title":"Storage Quota Setting","text":"You may set a default quota (e.g. 2GB) for all users. To do this, just add the following lines to seafile.conf
file
[quota]\n# default user quota in GB, integer only\ndefault = 2\n
This setting applies to all users. If you want to set quota for a specific user, you may log in to seahub website as administrator, then set it in \"System Admin\" page.
Since Pro 10.0.9 version, you can set the maximum number of files allowed in a library, and when this limit is exceeded, files cannot be uploaded to this library. There is no limit by default.
[quota]\nlibrary_file_limit = 100000\n
"},{"location":"config/seafile-conf/#default-history-length-limit","title":"Default history length limit","text":"If you don't want to keep all file revision history, you may set a default history length limit for all libraries.
[history]\nkeep_days = days of history to keep\n
"},{"location":"config/seafile-conf/#default-trash-expiration-time","title":"Default trash expiration time","text":"The default time for automatic cleanup of the libraries trash is 30 days.You can modify this time by adding the following configuration\uff1a
[library_trash]\nexpire_days = 60\n
"},{"location":"config/seafile-conf/#system-trash","title":"System Trash","text":"Seafile uses a system trash, where deleted libraries will be moved to. In this way, accidentally deleted libraries can be recovered by system admin.
"},{"location":"config/seafile-conf/#cache-pro-edition-only","title":"Cache (Pro Edition Only)","text":"Seafile Pro Edition uses memory caches in various cases to improve performance. Some session information is also saved into memory cache to be shared among the cluster nodes. Memcached or Reids can be use for memory cache.
If you use memcached:
[memcached]\n# Replace `localhost` with the memcached address:port if you're using remote memcached\n# POOL-MIN and POOL-MAX is used to control connection pool size. Usually the default is good enough.\nmemcached_options = --SERVER=localhost --POOL-MIN=10 --POOL-MAX=100\n
If you use redis:
[redis]\n# your redis server address\nredis_host = 127.0.0.1\n# your redis server port\nredis_port = 6379\n# size of connection pool to redis, default is 100\nmax_connections = 100\n
Redis support is added in version 11.0. Currently only single-node Redis is supported. Redis Sentinel or Cluster is not supported yet.
"},{"location":"config/seafile-conf/#seafile-fileserver-configuration","title":"Seafile fileserver configuration","text":"The configuration of seafile fileserver is in the [fileserver]
section of the file seafile.conf
[fileserver]\n# bind address for fileserver\n# default to 0.0.0.0, if deployed without proxy: no access restriction\n# set to 127.0.0.1, if used with local proxy: only access by local\nhost = 127.0.0.1\n# tcp port for fileserver\nport = 8082\n
Since Community Edition 6.2 and Pro Edition 6.1.9, you can set the number of worker threads used to serve HTTP requests. The default value is 10, which is good for most use cases.
[fileserver]\nworker_threads = 15\n
Change upload/download settings.
[fileserver]\n# Set maximum upload file size to 200M.\n# If not configured, there is no file size limit for uploading.\nmax_upload_size=200\n\n# Set maximum download directory size to 200M.\n# Default is 100M.\nmax_download_dir_size=200\n
After a file is uploaded via the web interface, or the cloud file browser in the client, it needs to be divided into fixed size blocks and stored into storage backend. We call this procedure \"indexing\". By default, the file server uses 1 thread to sequentially index the file and store the blocks one by one. This is suitable for most cases. But if you're using S3/Ceph/Swift backends, you may have more bandwidth in the storage backend for storing multiple blocks in parallel. We provide an option to define the number of concurrent threads in indexing:
[fileserver]\nmax_indexing_threads = 10\n
When users upload files in the web interface (Seahub), the file server divides the file into fixed-size blocks. The default block size for web-uploaded files is 8MB. The block size can be set here.
[fileserver]\n#Set block size to 2MB\nfixed_block_size=2\n
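For a given block size, the number of blocks produced for an upload is simply the file size divided by the block size, rounded up; a quick illustration:

```python
import math

# Number of fixed-size blocks a web-uploaded file is split into.
# block_size_mb defaults to the 8MB described above.
def block_count(file_size_bytes: int, block_size_mb: int = 8) -> int:
    block_size = block_size_mb * 1024 * 1024
    return math.ceil(file_size_bytes / block_size)

print(block_count(20 * 1024 * 1024))                   # 3 blocks at the 8MB default
print(block_count(20 * 1024 * 1024, block_size_mb=2))  # 10 blocks at 2MB
```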
When users upload files in the web interface, the file server assigns a token to authorize the upload operation. This token is valid for 1 hour by default. When uploading a large file over a WAN, the upload can take longer than 1 hour. You can change the token expiration time to a larger value.
[fileserver]\n#Set uploading time limit to 3600s\nweb_token_expire_time=3600\n
You can download a folder as a zip archive from Seahub, but some zip software on Windows doesn't support UTF-8. In that case, you can use the \"windows_encoding\" setting to work around it.
[zip]\n# The file name encoding of the downloaded zip file.\nwindows_encoding = iso-8859-1\n
The \"httptemp\" directory contains temporary files created during file upload and zip download. In some cases the temporary files are not cleaned up after the file transfer was interrupted. Starting from 7.1.5 version, file server will regularly scan the \"httptemp\" directory to remove files created long time ago.
[fileserver]\n# After how much time a temp file will be removed. The unit is in seconds. Default to 3 days.\nhttp_temp_file_ttl = x\n# File scan interval. The unit is in seconds. Default to 1 hour.\nhttp_temp_scan_interval = x\n
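The periodic cleanup described above can be sketched like this; the function and its parameters are illustrative, not the file server's actual code:

```python
import os
import time

# Sketch of a TTL-based httptemp cleanup: remove regular files whose
# modification time is older than ttl_seconds. Illustration only.
def clean_httptemp(directory: str, ttl_seconds: float):
    now = time.time()
    removed = []
    for name in os.listdir(directory):
        path = os.path.join(directory, name)
        if os.path.isfile(path) and now - os.path.getmtime(path) > ttl_seconds:
            os.remove(path)
            removed.append(name)
    return removed

# Example (path is illustrative): remove temp files older than 3 days.
# clean_httptemp("/opt/seafile/seafile-data/httptemp", ttl_seconds=3 * 24 * 3600)
```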
New in Seafile Pro 7.1.16 and Pro 8.0.3: You can set the maximum number of files contained in a library that can be synced by the Seafile client. The default is 100000. When you download a repo, the Seafile client will request the fs id list, and you can control the timeout of this request through the fs_id_list_request_timeout
configuration, which defaults to 5 minutes. These two options are added to prevent long fs-id-list requests from overloading the server.
Since Pro 8.0.4 version, you can set both options to -1, to allow unlimited size and timeout.
[fileserver]\nmax_sync_file_count = 100000\nfs_id_list_request_timeout = 300\n
If you use object storage as the storage backend, when a large file is frequently downloaded, the same blocks need to be fetched from the storage backend to the Seafile server repeatedly. This may waste bandwidth and cause high load on the internal network. Since Seafile Pro 8.0.5, block caching has been added to improve the situation.
Set the use_block_cache option in the [fileserver] group to enable it; it's not enabled by default. The block_cache_size_limit option limits the size of the cache; its default value is 10GB. The blocks are cached in the seafile-data/block-cache directory. When the total size of cached files exceeds the limit, seaf-server cleans up older files until the size drops to 70% of the limit. The cleanup interval is 5 minutes. Make a good estimate of how much space you need for the cache directory; otherwise, frequent downloads can fill it up quickly. The block_cache_file_types option selects the file types that are cached; its default value is mp4;mov.
[fileserver]\nuse_block_cache = true\n# Set block cache size limit to 100MB\nblock_cache_size_limit = 100\nblock_cache_file_types = mp4;mov\n
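The cleanup policy described above (delete the oldest cached files until the total size drops to 70% of the limit) can be sketched as follows; this is an illustration, not seaf-server's actual code:

```python
import os

# Sketch of the block-cache cleanup policy: when cached files exceed the
# size limit, delete oldest files until the total is at most 70% of it.
def cleanup_block_cache(cache_dir: str, size_limit_bytes: int):
    files = [
        (os.path.getmtime(p), os.path.getsize(p), p)
        for p in (os.path.join(cache_dir, n) for n in os.listdir(cache_dir))
        if os.path.isfile(p)
    ]
    total = sum(size for _, size, _ in files)
    if total <= size_limit_bytes:
        return  # under the limit: nothing to do
    target = size_limit_bytes * 0.7
    for _, size, path in sorted(files):  # oldest (smallest mtime) first
        if total <= target:
            break
        os.remove(path)
        total -= size
```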
When a large number of files are uploaded through the web page and API, it will be expensive to calculate block IDs based on the block contents. Since Seafile-pro-9.0.6, you can add the skip_block_hash
option to use a random string as block ID. Warning
This option prevents fsck from checking block content integrity. You should pass the --shallow
option to fsck so that it does not check content integrity.
[fileserver]\nskip_block_hash = true\n
If you want to limit the type of files when uploading files, since Seafile Pro 10.0.0 version, you can set file_ext_white_list
option in the [fileserver]
group. This option is a list of file types, only the file types in this list are allowed to be uploaded. It's not enabled by default.
[fileserver]\nfile_ext_white_list = md;mp4;mov\n
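The whitelist check behaves roughly like this; the helper below is an illustration, not the file server's implementation:

```python
import os

# Illustration of a file_ext_white_list check: only files whose extension
# appears in the semicolon-separated list are accepted for upload.
WHITE_LIST = "md;mp4;mov".split(";")

def upload_allowed(filename: str) -> bool:
    ext = os.path.splitext(filename)[1].lstrip(".").lower()
    return ext in WHITE_LIST

print(upload_allowed("notes.md"))   # True
print(upload_allowed("movie.MOV"))  # True
print(upload_allowed("virus.exe"))  # False
```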
Since Seafile 10.0.1, when you use the Go fileserver, you can set the upload_limit
and download_limit
options in the [fileserver]
group to limit the speed of file upload and download. It's not enabled by default.
[fileserver]\n# The unit is in KB/s.\nupload_limit = 100\ndownload_limit = 100\n
Since Seafile 11.0.7 Pro, you can ask the file server to scan every file uploaded via the web APIs for viruses. Find more options about virus scanning at virus scan.
[fileserver]\n# default is false\ncheck_virus_on_web_upload = true\n
"},{"location":"config/seafile-conf/#database-configuration","title":"Database configuration","text":"The configurations of database are stored in the [database]
section.
From Seafile 11.0, SQLite is no longer supported.
[database]\ntype=mysql\nhost=127.0.0.1\nuser=root\npassword=root\ndb_name=seafile_db\nconnection_charset=utf8\nmax_connections=100\n
When you configure seafile server to use MySQL, the default connection pool size is 100, which should be enough for most use cases.
Since Seafile 10.0.2, you can enable the encrypted connections to the MySQL server by adding the following configuration options:
[database]\nuse_ssl = true\nskip_verify = false\nca_path = /etc/mysql/ca.pem\n
When use_ssl is set to true and skip_verify to false, Seafile verifies the MySQL server certificate against the CA configured in ca_path. The ca_path option is the path to a trusted CA certificate used to sign MySQL server certificates. When skip_verify is true, the ca_path option is not needed and the MySQL server certificate won't be verified.
The Seafile Pro server automatically expires file locks after some time, to prevent a file from staying locked for too long. The expiration time can be tuned in the seafile.conf file.
[file_lock]\ndefault_expire_hours = 6\n
The default is 12 hours.
Since Seafile-pro-9.0.6, you can add a cache for getting locked files (reducing server load caused by sync clients). Since Pro Edition 12, this option is enabled by default.
[file_lock]\nuse_locked_file_cache = true\n
At the same time, you also need to configure the following memcache options for the cache to take effect:
[memcached]\nmemcached_options = --SERVER=<the IP of Memcached Server> --POOL-MIN=10 --POOL-MAX=100\n
"},{"location":"config/seafile-conf/#storage-backends","title":"Storage Backends","text":"You may configure Seafile to use various kinds of object storage backends.
You may also configure Seafile to use multiple storage backends at the same time.
"},{"location":"config/seafile-conf/#cluster","title":"Cluster","text":"When you deploy Seafile in a cluster, you should add the following configuration:
[cluster]\nenabled = true\n
Tip
Since version 12, if you use Docker to deploy cluster, this option is no longer needed.
"},{"location":"config/seafile-conf/#enable-slow-log","title":"Enable Slow Log","text":"Since Seafile-pro-6.3.10, you can enable seaf-server's RPC slow log to do performance analysis. The slow log is enabled by default.
If you want to configure related options, add the options to seafile.conf:
[slow_log]\n# default to true\nenable_slow_log = true\n# the unit of all slow log thresholds is millisecond.\n# default to 5000 milliseconds, only RPC queries processed for longer than 5000 milliseconds will be logged.\nrpc_slow_threshold = 5000\n
You can find seafile_slow_rpc.log
in logs/slow_logs
. You can also use logrotate to rotate the log files. You just need to send SIGUSR2
to seaf-server
process. The slow log file will be closed and reopened.
Since 9.0.2 Pro, the signal to trigger log rotation has been changed to SIGUSR1
. This signal will trigger rotation for all log files opened by seaf-server. You should change your log rotate settings accordingly.
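For example, a logrotate rule can send the signal from a postrotate script (a sketch; the log path and the use of killall are assumptions based on a default binary installation):

```
/opt/seafile/logs/seafile.log {
    weekly
    rotate 8
    compress
    missingok
    postrotate
        killall -USR1 seaf-server 2>/dev/null || true
    endscript
}
```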
Even though Nginx logs all requests with certain details, such as url, response code, upstream process time, it's sometimes desirable to have more context about the requests, such as the user id for each request. Such information can only be logged from file server itself. Since 9.0.2 Pro, access log feature is added to fileserver.
To enable access log, add below options to seafile.conf:
[fileserver]\n# default to false. If enabled, fileserver-access.log will be written to log directory.\nenable_access_log = true\n
The log format is as follows:
start time - user id - url - response code - process time\n
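Given that format, with fields separated by " - ", an access-log line can be split back into its parts. A minimal sketch in Python; the sample line is invented for illustration:

```python
# Sketch: split one fileserver-access.log line into its five fields.
# The " - " separator and field order follow the format described above;
# the sample log line is invented for illustration.

def parse_access_log_line(line):
    start_time, user_id, url, code, duration = line.strip().split(" - ")
    return {
        "start_time": start_time,
        "user_id": user_id,
        "url": url,
        "response_code": int(code),
        "process_time": float(duration),
    }

sample = "2023-05-04 10:20:30 - alice@example.com - /seafhttp/files/abc - 200 - 0.045"
record = parse_access_log_line(sample)
```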
You can use SIGUSR1
to trigger log rotation.
Seafile 9.0 introduces a new fileserver implemented in Go programming language. To enable it, you can set the options below in seafile.conf:
[fileserver]\nuse_go_fileserver = true\n
Go fileserver has 3 advantages over the traditional fileserver implemented in C language:
You can use the max_sync_file_count
option to limit the size of a library to be synced; the default is 100K. With the Go fileserver you can set this option to a much higher number, such as 1 million, and the max_download_dir_size
option is thus no longer needed by the Go fileserver. The Go fileserver caches fs objects in memory. On the one hand, this avoids repeated creation and destruction of frequently accessed objects; on the other hand, it slows down the rate at which objects are released, which prevents Go's GC mechanism from consuming too much CPU time. You can set the amount of memory used by the fs cache through the following option:
[fileserver]\n# The unit is in M. Default to 2G.\nfs_cache_limit = 100\n
"},{"location":"config/seafile-conf/#profiling-go-fileserver-performance","title":"Profiling Go Fileserver Performance","text":"Since Seafile 9.0.7, you can enable the profile function of go fileserver by adding the following configuration options:
# profile_password is required, change it for your need\n[fileserver]\nenable_profiling = true\nprofile_password = 8kcUz1I2sLaywQhCRtn2x1\n
This interface can be used through the pprof tool provided by Go language. See https://pkg.go.dev/net/http/pprof for details. Note that you have to first install Go on the client that issues the below commands. The password parameter should match the one you set in the configuration.
go tool pprof http://localhost:8082/debug/pprof/heap?password=8kcUz1I2sLaywQhCRtn2x1\ngo tool pprof http://localhost:8082/debug/pprof/profile?password=8kcUz1I2sLaywQhCRtn2x1\n
"},{"location":"config/seafile-conf/#notification-server-configuration","title":"Notification server configuration","text":"Since Seafile 10.0.0, you can ask Seafile server to send notifications (file changes, lock changes and folder permission changes) to Notification Server component.
[notification]\nenabled = true\n# IP address of the server running notification server\n# or \"notification-server\" if you are running notification server container on the same host as Seafile server\nhost = 192.168.0.83\n# the port of notification server\nport = 8083\n
Tip
The configuration here only works for version >= 12.0. The configuration for the notification server has been changed in 12.0 to make it clearer. The new configuration is not compatible with older versions.
"},{"location":"config/seahub_customization/","title":"Seahub customization","text":""},{"location":"config/seahub_customization/#customize-seahub-logo-and-css","title":"Customize Seahub Logo and CSS","text":"Create customize folder
Deploy in DockerDeploy from binary packagesmkdir -p /opt/seafile-data/seahub/media/custom\n
mkdir /opt/seafile/seafile-server-latest/seahub/media/custom\n
During upgrades, the Seafile upgrade script will automatically create symbolic links to preserve your customization.
"},{"location":"config/seahub_customization/#customize-logo","title":"Customize Logo","text":"Add your logo file to custom/
Overwrite LOGO_PATH
in seahub_settings.py
LOGO_PATH = 'custom/mylogo.png'\n
The default width and height for the logo are 149px and 32px; you may need to change these to match your logo.
LOGO_WIDTH = 149\nLOGO_HEIGHT = 32\n
"},{"location":"config/seahub_customization/#customize-favicon","title":"Customize Favicon","text":"Add your favicon file to custom/
Overwrite FAVICON_PATH
in seahub_settings.py
FAVICON_PATH = 'custom/favicon.png'\n
"},{"location":"config/seahub_customization/#customize-seahub-css","title":"Customize Seahub CSS","text":"Add your css file to custom/
, for example, custom.css
Overwrite BRANDING_CSS
in seahub_settings.py
BRANDING_CSS = 'custom/custom.css'\n
"},{"location":"config/seahub_customization/#customize-help-page","title":"Customize help page","text":"Deploy in DockerDeploy from binary packages mkdir -p /opt/seafile-data/seahub/media/custom/templates/help/\ncd /opt/seafile-data/seahub/media/custom\ncp ../../help/templates/help/install.html templates/help/\n
mkdir /opt/seafile/seafile-server-latest/seahub/media/custom/templates/help/\ncd /opt/seafile/seafile-server-latest/seahub/media/custom\ncp ../../help/templates/help/install.html templates/help/\n
Modify the templates/help/install.html
file and save it. You will see the new help page.
You can add an extra note in sharing dialog in seahub_settings.py
ADDITIONAL_SHARE_DIALOG_NOTE = {\n 'title': 'Attention! Read before sharing files:',\n 'content': 'Do not share personal or confidential official data with **.'\n}\n
Result:
"},{"location":"config/seahub_customization/#add-custom-navigation-items","title":"Add custom navigation items","text":"Since Pro 7.0.9, Seafile supports adding some custom navigation entries to the home page for quick access. This requires you to add the following configuration information to the conf/seahub_settings.py
configuration file:
CUSTOM_NAV_ITEMS = [\n {'icon': 'sf2-icon-star',\n 'desc': 'Custom navigation 1',\n 'link': 'https://www.seafile.com'\n },\n {'icon': 'sf2-icon-wiki-view',\n 'desc': 'Custom navigation 2',\n 'link': 'https://www.seafile.com/help'\n },\n {'icon': 'sf2-icon-wrench',\n 'desc': 'Custom navigation 3',\n 'link': 'http://www.example.com'\n },\n]\n
Note
The icon
field currently only supports icons in Seafile that begin with sf2-icon
. You can find the list of icons here:
Then restart the Seahub service for the changes to take effect.
Once you log in to the Seafile system homepage again, you will see the new navigation entry under the Tools
navigation bar on the left.
ADDITIONAL_APP_BOTTOM_LINKS = {\n 'seafile': 'https://example.seahub.com/seahub',\n 'dtable-web': 'https://example.seahub.com/web'\n}\n
Result:
"},{"location":"config/seahub_customization/#add-more-links-to-about-dialog","title":"Add more links to about dialog","text":"ADDITIONAL_ABOUT_DIALOG_LINKS = {\n 'seafile': 'https://example.seahub.com/seahub',\n 'dtable-web': 'https://example.seahub.com/dtable-web'\n}\n
Result:
"},{"location":"config/seahub_settings_py/","title":"Seahub Settings","text":"Tip
You can also modify most of the config items via the web interface. These items are saved in the database table (seahub-db/constance_config) and take priority over the items in config files. If you want to disable settings via the web interface, you can add ENABLE_SETTINGS_VIA_WEB = False
to seahub_settings.py
.
Refer to email sending documentation.
"},{"location":"config/seahub_settings_py/#cache","title":"Cache","text":"Seahub caches items (avatars, profiles, etc.) on the file system by default (/tmp/seahub_cache/). You can replace this with Memcached or Redis.
MemcachedRedis# on Debian/Ubuntu 18.04+\napt-get install memcached libmemcached-dev -y\npip3 install --timeout=3600 pylibmc django-pylibmc\n\nsystemctl enable --now memcached\n
Add the following configuration to seahub_settings.py
.
CACHES = {\n 'default': {\n 'BACKEND': 'django_pylibmc.memcached.PyLibMCCache',\n 'LOCATION': '127.0.0.1:11211',\n },\n}\n
Redis support was added in Seafile version 11.0.
Install Redis with package installers in your OS.
Please refer to Django's documentation about using Redis cache.
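As an illustration, a Redis-backed cache configuration for seahub_settings.py might look like the following (a sketch using Django's built-in RedisCache backend, available since Django 4.0; the host, port, and database number are assumptions for a local Redis instance):

```python
# Sketch: Redis cache backend for seahub_settings.py.
# Assumes Django >= 4.0 (built-in RedisCache) and the redis-py package,
# with Redis listening locally on the default port.
CACHES = {
    'default': {
        'BACKEND': 'django.core.cache.backends.redis.RedisCache',
        'LOCATION': 'redis://127.0.0.1:6379/0',
    },
}
```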
# For security consideration, please set to match the host/domain of your site, e.g., ALLOWED_HOSTS = ['.example.com'].\n# Please refer https://docs.djangoproject.com/en/dev/ref/settings/#allowed-hosts for details.\nALLOWED_HOSTS = ['.myseafile.com']\n\n\n# Whether to use a secure cookie for the CSRF cookie\n# https://docs.djangoproject.com/en/3.2/ref/settings/#csrf-cookie-secure\nCSRF_COOKIE_SECURE = True\n\n# The value of the SameSite flag on the CSRF cookie\n# https://docs.djangoproject.com/en/3.2/ref/settings/#csrf-cookie-samesite\nCSRF_COOKIE_SAMESITE = 'Strict'\n\n# https://docs.djangoproject.com/en/3.2/ref/settings/#csrf-trusted-origins\nCSRF_TRUSTED_ORIGINS = ['https://www.myseafile.com']\n
"},{"location":"config/seahub_settings_py/#user-management-options","title":"User management options","text":"The following options affect user registration, password and session.
# Enable or disable registration on web. Default is `False`.\nENABLE_SIGNUP = False\n\n# Activate or deactivate user when registration complete. Default is `True`.\n# If set to `False`, new users need to be activated by admin in admin panel.\nACTIVATE_AFTER_REGISTRATION = False\n\n# Whether to send email when a system admin adds a new member. Default is `True`.\nSEND_EMAIL_ON_ADDING_SYSTEM_MEMBER = True\n\n# Whether to send email when a system admin resets a user's password. Default is `True`.\nSEND_EMAIL_ON_RESETTING_USER_PASSWD = True\n\n# Send system admin notify email when user registration is complete. Default is `False`.\nNOTIFY_ADMIN_AFTER_REGISTRATION = True\n\n# Remember days for login. Default is 7\nLOGIN_REMEMBER_DAYS = 7\n\n# Attempt limit before showing a captcha when login.\nLOGIN_ATTEMPT_LIMIT = 3\n\n# deactivate user account when login attempts exceed limit\n# Since version 5.1.2 or pro 5.1.3\nFREEZE_USER_ON_LOGIN_FAILED = False\n\n# minimum length for user's password\nUSER_PASSWORD_MIN_LENGTH = 6\n\n# LEVEL based on four types of input:\n# num, upper letter, lower letter, other symbols\n# '3' means password must have at least 3 types of the above.\nUSER_PASSWORD_STRENGTH_LEVEL = 3\n\n# default False, only check USER_PASSWORD_MIN_LENGTH\n# when True, check password strength level, STRONG(or above) is allowed\nUSER_STRONG_PASSWORD_REQUIRED = False\n\n# Force user to change password when admin adds/resets a user.\n# Added in 5.1.1, defaults to True.\nFORCE_PASSWORD_CHANGE = True\n\n# Age of cookie, in seconds (default: 2 weeks).\nSESSION_COOKIE_AGE = 60 * 60 * 24 * 7 * 2\n\n# Whether a user's session cookie expires when the Web browser is closed.\nSESSION_EXPIRE_AT_BROWSER_CLOSE = False\n\n# Whether to save the session data on every request. Default is `False`\nSESSION_SAVE_EVERY_REQUEST = False\n\n# Whether to enable the feature \"published library\". 
Default is `False`\n# Since 6.1.0 CE\nENABLE_WIKI = True\n\n# In old versions, if you use Single Sign-On, the password is not saved in Seafile.\n# Users can't use WebDAV because Seafile can't check whether the password is correct.\n# Since version 6.3.8, you can enable this option to let users set a specific password for WebDAV login.\n# Users who log in via SSO can use this password to log in to WebDAV.\n# Enable the feature. pycryptodome should be installed first.\n# sudo pip install pycryptodome==3.12.0\nENABLE_WEBDAV_SECRET = True\nWEBDAV_SECRET_MIN_LENGTH = 8\n\n# LEVEL for the password, based on four types of input:\n# num, upper letter, lower letter, other symbols\n# '3' means password must have at least 3 types of the above.\nWEBDAV_SECRET_STRENGTH_LEVEL = 1\n\n\n# Since version 7.0.9, you can force all users to log in with two-factor authentication.\n# The prerequisite is that the administrator should 'enable two factor authentication' in the 'System Admin -> Settings' page.\n# Then you can add the following configuration information to the configuration file.\nENABLE_FORCE_2FA_TO_ALL_USERS = True\n
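The STRENGTH_LEVEL settings above count how many of the four character classes (num, upper letter, lower letter, other symbols) appear in a password. A sketch of that counting rule (the function name is ours, not Seafile's):

```python
# Sketch: count character classes (digit, upper, lower, other) present
# in a password, mirroring the STRENGTH_LEVEL semantics described above.
def password_strength_level(password):
    classes = [
        any(c.isdigit() for c in password),   # num
        any(c.isupper() for c in password),   # upper letter
        any(c.islower() for c in password),   # lower letter
        any(not c.isalnum() for c in password),  # other symbols
    ]
    return sum(classes)
```

With USER_PASSWORD_STRENGTH_LEVEL = 3, a password is accepted only if this count is at least 3.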
"},{"location":"config/seahub_settings_py/#library-snapshot-label-feature","title":"Library snapshot label feature","text":"# Turn on this option to let users to add a label to a library snapshot. Default is `False`\nENABLE_REPO_SNAPSHOT_LABEL = False\n
"},{"location":"config/seahub_settings_py/#library-options","title":"Library options","text":"Options for libraries:
# whether to enable creating encrypted libraries\nENABLE_ENCRYPTED_LIBRARY = True\n\n# version for encrypted library\n# should only be `2` or `4`.\n# version 3 is insecure (using AES128 encryption) so it's not recommended any more.\nENCRYPTED_LIBRARY_VERSION = 2\n\n# Since version 12, you can choose password hash algorithm for new encrypted libraries.\n# The password is used to encrypt the encryption key. So using a secure password hash algorithm to\n# prevent brute-force password guessing is important.\n# Before version 12, a fixed algorithm (PBKDF2-SHA256 with 1000 iterations) is used.\n#\n# Currently two hash algorithms are supported.\n# - PBKDF2: The only available parameter is the number of iterations. You need to increase\n# the number of iterations over time, as GPUs are more and more used for such calculation.\n# The default number of iterations is 1000. As of 2023, the recommended iterations is 600,000.\n# - Argon2id: Secure hash algorithm that has high cost even for GPUs. There are 3 parameters that\n# can be set: time cost, memory cost, and parallelism degree. The parameters are separated by commas,\n# e.g. \"2,102400,8\", which are the default parameters used in Seafile. 
Learn more about this algorithm\n# on https://github.com/P-H-C/phc-winner-argon2 .\n#\n# Note that only sync client >= 9.0.9 and SeaDrive >= 3.0.12 supports syncing libraries created with these algorithms.\nENCRYPTED_LIBRARY_PWD_HASH_ALGO = \"argon2id\"\nENCRYPTED_LIBRARY_PWD_HASH_PARAMS = \"2,102400,8\"\n# ENCRYPTED_LIBRARY_PWD_HASH_ALGO = \"pbkdf2_sha256\"\n# ENCRYPTED_LIBRARY_PWD_HASH_PARAMS = \"600000\"\n\n# minimum length for password of encrypted library\nREPO_PASSWORD_MIN_LENGTH = 8\n\n# force the use of a password when generating a share/upload link (since version 8.0.9)\nSHARE_LINK_FORCE_USE_PASSWORD = False\n\n# minimum length for password for share link (since version 4.4)\nSHARE_LINK_PASSWORD_MIN_LENGTH = 8\n\n# LEVEL for the password of a share/upload link\n# based on four types of input:\n# num, upper letter, lower letter, other symbols\n# '3' means password must have at least 3 types of the above. (since version 8.0.9)\nSHARE_LINK_PASSWORD_STRENGTH_LEVEL = 3\n\n# Default expire days for share link (since version 6.3.8)\n# Once this value is configured, the user can no longer generate a share link with no expiration time.\n# If the expiration value is not set when the share link is generated, the value configured here will be used.\nSHARE_LINK_EXPIRE_DAYS_DEFAULT = 5\n\n# minimum expire days for share link (since version 6.3.6)\n# SHARE_LINK_EXPIRE_DAYS_MIN should be less than SHARE_LINK_EXPIRE_DAYS_DEFAULT (If the latter is set).\nSHARE_LINK_EXPIRE_DAYS_MIN = 3 # default is 0, no limit.\n\n# maximum expire days for share link (since version 6.3.6)\n# SHARE_LINK_EXPIRE_DAYS_MAX should be greater than SHARE_LINK_EXPIRE_DAYS_DEFAULT (If the latter is set).\nSHARE_LINK_EXPIRE_DAYS_MAX = 8 # default is 0, no limit.\n\n# Default expire days for upload link (since version 7.1.6)\n# Once this value is configured, the user can no longer generate an upload link with no expiration time.\n# If the expiration value is not set when the upload link is generated, the 
value configured here will be used.\nUPLOAD_LINK_EXPIRE_DAYS_DEFAULT = 5\n\n# minimum expire days for upload link (since version 7.1.6)\n# UPLOAD_LINK_EXPIRE_DAYS_MIN should be less than UPLOAD_LINK_EXPIRE_DAYS_DEFAULT (If the latter is set).\nUPLOAD_LINK_EXPIRE_DAYS_MIN = 3 # default is 0, no limit.\n\n# maximum expire days for upload link (since version 7.1.6)\n# UPLOAD_LINK_EXPIRE_DAYS_MAX should be greater than UPLOAD_LINK_EXPIRE_DAYS_DEFAULT (If the latter is set).\nUPLOAD_LINK_EXPIRE_DAYS_MAX = 8 # default is 0, no limit.\n\n# force user login when view file/folder share link (since version 6.3.6)\nSHARE_LINK_LOGIN_REQUIRED = True\n\n# enable water mark when view(not edit) file in web browser (since version 6.3.6)\nENABLE_WATERMARK = True\n\n# Disable sync with any folder. Default is `False`\n# NOTE: since version 4.2.4\nDISABLE_SYNC_WITH_ANY_FOLDER = True\n\n# Enable or disable library history setting\nENABLE_REPO_HISTORY_SETTING = True\n\n# Enable or disable user share library to any group\n# Since version 6.2.0\nENABLE_SHARE_TO_ALL_GROUPS = True\n\n# Enable or disable user to clean trash (default is True)\n# Since version 6.3.6\nENABLE_USER_CLEAN_TRASH = True\n\n# Add a report abuse button on download links. (since version 7.1.0)\n# Users can report abuse on the share link page, fill in the report type, contact information, and description.\n# Default is false.\nENABLE_SHARE_LINK_REPORT_ABUSE = True\n
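As the comments above note, the MIN/DEFAULT/MAX expire-day settings must be mutually consistent. A small sketch of that constraint check, treating 0 as "no limit" as the comments describe (the helper name is ours):

```python
# Sketch: check that share/upload link expiry settings are consistent.
# A value of 0 for the MIN or MAX setting means "no limit".
def expire_days_consistent(min_days, default_days, max_days):
    if min_days and default_days and min_days > default_days:
        return False  # MIN must not exceed DEFAULT
    if max_days and default_days and max_days < default_days:
        return False  # MAX must not be below DEFAULT
    if min_days and max_days and min_days > max_days:
        return False  # MIN must not exceed MAX
    return True
```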
Options for online file preview:
# Online preview maximum file size, defaults to 30M.\nFILE_PREVIEW_MAX_SIZE = 30 * 1024 * 1024\n\n# Extensions of previewed text files.\n# NOTE: since version 6.1.1\nTEXT_PREVIEW_EXT = \"\"\"ac, am, bat, c, cc, cmake, cpp, cs, css, diff, el, h, html,\nhtm, java, js, json, less, make, org, php, pl, properties, py, rb,\nscala, script, sh, sql, txt, text, tex, vi, vim, xhtml, xml, log, csv,\ngroovy, rst, patch, go\"\"\"\n\n\n# Seafile only generates thumbnails for images smaller than the following size.\n# Since version 6.3.8 pro, support PSD online preview.\nTHUMBNAIL_IMAGE_SIZE_LIMIT = 30 # MB\n\n# Enable or disable thumbnail for video. ffmpeg and moviepy should be installed first.\n# For details, please refer to https://manual.seafile.com/deploy/video_thumbnails.html\n# NOTE: this option is deprecated in version 7.1\nENABLE_VIDEO_THUMBNAIL = False\n\n# Use the frame at 5 second as thumbnail\n# NOTE: this option is deprecated in version 7.1\nTHUMBNAIL_VIDEO_FRAME_TIME = 5\n\n# Absolute filesystem path to the directory that will hold thumbnail files.\nTHUMBNAIL_ROOT = '/haiwen/seahub-data/thumbnail/thumb/'\n\n# Default size for picture preview. Enlarging this size can improve the preview quality.\n# NOTE: since version 6.1.1\nTHUMBNAIL_SIZE_FOR_ORIGINAL = 1024\n
"},{"location":"config/seahub_settings_py/#cloud-mode","title":"Cloud Mode","text":"You should enable cloud mode if you use Seafile with an unknown user base. It disables the organization tab in Seahub's website so that users can't access the user list. Cloud mode provides some nice features like sharing content with unregistered users and sending invitations to them, so you may also want to enable user registration. Through the global address book (since version 4.2.3) every user account can be found by search, so you probably want to disable it.
# Enable cloud mode and hide `Organization` tab.\nCLOUD_MODE = True\n\n# Disable global address book\nENABLE_GLOBAL_ADDRESSBOOK = False\n
"},{"location":"config/seahub_settings_py/#single-sign-on","title":"Single Sign On","text":"# Enable authentication with ADFS\n# Default is False\n# Since 6.0.9\nENABLE_ADFS_LOGIN = True\n\n# Force user login through ADFS instead of email and password\n# Default is False\n# Since 11.0.7\nDISABLE_ADFS_USER_PWD_LOGIN = True\n\n# Enable authentication with Kerberos\n# Default is False\nENABLE_KRB5_LOGIN = True\n\n# Enable authentication with Shibboleth\n# Default is False\nENABLE_SHIBBOLETH_LOGIN = True\n\n# Enable client to open an external browser for single sign on\n# When it is false, the old built-in browser is opened for single sign on\n# When it is true, the default browser of the operating system is opened\n# The benefit of using the system browser is that it can support hardware 2FA\n# Since 11.0.0, and sync client 9.0.5, drive client 3.0.8\nCLIENT_SSO_VIA_LOCAL_BROWSER = True # default is False\nCLIENT_SSO_UUID_EXPIRATION = 5 * 60 # in seconds\n
"},{"location":"config/seahub_settings_py/#other-options","title":"Other options","text":"# This is outside URL for Seahub(Seafile Web). \n# The domain part (i.e., www.example.com) will be used in generating share links and download/upload file via web.\n# Note: SERVICE_URL is moved to seahub_settings.py since 9.0.0\n# Note: SERVICE_URL is no longer used since version 12.0\n# SERVICE_URL = 'https://seafile.example.com:'\n\n# Disable settings via Web interface in system admin->settings\n# Default is True\n# Since 5.1.3\nENABLE_SETTINGS_VIA_WEB = False\n\n# Choices can be found here:\n# http://en.wikipedia.org/wiki/List_of_tz_zones_by_name\n# although not all choices may be available on all operating systems.\n# If running in a Windows environment this must be set to the same as your\n# system time zone.\nTIME_ZONE = 'UTC'\n\n# Language code for this installation. All choices can be found here:\n# http://www.i18nguy.com/unicode/language-identifiers.html\n# Default language for sending emails.\nLANGUAGE_CODE = 'en'\n\n# Custom language code choice.\nLANGUAGES = (\n ('en', 'English'),\n ('zh-cn', '\u7b80\u4f53\u4e2d\u6587'),\n ('zh-tw', '\u7e41\u9ad4\u4e2d\u6587'),\n)\n\n# Set this to your website/company's name. This is contained in email notifications and welcome message when user login for the first time.\nSITE_NAME = 'Seafile'\n\n# Browser tab's title\nSITE_TITLE = 'Private Seafile'\n\n# If you don't want to run seahub website on your site's root path, set this option to your preferred path.\n# e.g. setting it to '/seahub/' would run seahub on http://example.com/seahub/.\nSITE_ROOT = '/'\n\n# Max number of files when user upload file/folder.\n# Since version 6.0.4\nMAX_NUMBER_OF_FILES_FOR_FILEUPLOAD = 500\n\n# Control the language that send email. 
Default to user's current language.\n# Since version 6.1.1\nSHARE_LINK_EMAIL_LANGUAGE = ''\n\n# Interval for browser requests unread notifications\n# Since PRO 6.1.4 or CE 6.1.2\nUNREAD_NOTIFICATIONS_REQUEST_INTERVAL = 3 * 60 # seconds\n\n# Whether to allow user to delete account, change login password or update basic user\n# info on profile page.\n# Since PRO 6.3.10\nENABLE_DELETE_ACCOUNT = False\nENABLE_UPDATE_USER_INFO = False\nENABLE_CHANGE_PASSWORD = False\n\n# Get web api auth token on profile page.\nENABLE_GET_AUTH_TOKEN_BY_SESSION = True\n\n# Since 8.0.6 CE/PRO version.\n# URL redirected to after the user logs out of Seafile.\n# Usually configured as Single Logout url.\nLOGOUT_REDIRECT_URL = 'http{s}://www.example-url.com'\n\n\n# Enable system admin to add T&C; all users need to accept the terms before using. Defaults to `False`.\n# Since version 6.0\nENABLE_TERMS_AND_CONDITIONS = True\n\n# Enable two factor authentication for accounts. Defaults to `False`.\n# Since version 6.0\nENABLE_TWO_FACTOR_AUTH = True\n\n# Enable users to select a template when creating a library.\n# When a user selects a template, Seafile will create folders related to the pattern automatically.\n# Since version 6.0\nLIBRARY_TEMPLATES = {\n 'Technology': ['/Develop/Python', '/Test'],\n 'Finance': ['/Current assets', '/Fixed assets/Computer']\n}\n\n# Enable a user to change password in 'settings' page. Default to `True`\n# Since version 6.2.11\nENABLE_CHANGE_PASSWORD = True\n\n# Whether to show contact email when searching users.\nENABLE_SHOW_CONTACT_EMAIL_WHEN_SEARCH_USER = True\n
"},{"location":"config/seahub_settings_py/#pro-edition-only-options","title":"Pro edition only options","text":"# Whether to show the used traffic in user's profile popup dialog. Default is True\nSHOW_TRAFFIC = True\n\n# Allow administrator to view user's file in UNENCRYPTED libraries\n# through Libraries page in System Admin. Default is False.\nENABLE_SYS_ADMIN_VIEW_REPO = True\n\n# For un-login users, providing an email before downloading or uploading on shared link page.\n# Since version 5.1.4\nENABLE_SHARE_LINK_AUDIT = True\n\n# Check virus after upload files to shared upload links. Defaults to `False`.\n# Since version 6.0\nENABLE_UPLOAD_LINK_VIRUS_CHECK = True\n\n# Send email to these email addresses when a virus is detected.\n# This list can be any valid email address, not necessarily the emails of Seafile user.\n# Since version 6.0.8\nVIRUS_SCAN_NOTIFY_LIST = ['user_a@seafile.com', 'user_b@seafile.com']\n
"},{"location":"config/seahub_settings_py/#restful-api","title":"RESTful API","text":"# API throttling related settings. Enlarge the rates if you get 429 response codes during API calls.\nREST_FRAMEWORK = {\n 'DEFAULT_THROTTLE_RATES': {\n 'ping': '600/minute',\n 'anon': '5/minute',\n 'user': '300/minute',\n },\n 'UNICODE_JSON': False,\n}\n\n# Throttling whitelist used to disable throttle for certain IPs.\n# e.g. REST_FRAMEWORK_THROTTING_WHITELIST = ['127.0.0.1', '192.168.1.1']\n# Please make sure `REMOTE_ADDR` header is configured in Nginx conf according to https://manual.seafile.com/12.0/setup_binary/ce/deploy_with_nginx.html.\nREST_FRAMEWORK_THROTTING_WHITELIST = []\n
"},{"location":"config/seahub_settings_py/#seahub-custom-functions","title":"Seahub Custom Functions","text":"Since version 6.2, you can define a custom function to modify the result of user search function.
For example, if you want to allow users to search only for users in the same institution, you can define a custom_search_user
function in {seafile install path}/conf/seahub_custom_functions/__init__.py
Code example:
import os\nimport sys\n\ncurrent_path = os.path.dirname(os.path.abspath(__file__))\nseahub_dir = os.path.join(current_path, \\\n '../../seafile-server-latest/seahub/seahub')\nsys.path.append(seahub_dir)\n\nfrom seahub.profile.models import Profile\ndef custom_search_user(request, emails):\n\n institution_name = ''\n\n username = request.user.username\n profile = Profile.objects.get_profile_by_user(username)\n if profile:\n institution_name = profile.institution\n\n inst_users = [p.user for p in\n Profile.objects.filter(institution=institution_name)]\n\n filtered_emails = []\n for email in emails:\n if email in inst_users:\n filtered_emails.append(email)\n\n return filtered_emails\n
You should NOT change the name of custom_search_user
and seahub_custom_functions/__init__.py
Since version 6.2.5 pro, if you enable the ENABLE_SHARE_TO_ALL_GROUPS feature on the sysadmin settings page, you can also define a custom function to return the groups a user can share a library to.
For example, if you want to let a user share a library to both their own groups and the groups of user test@test.com
, you can define a custom_get_groups
function in {seafile install path}/conf/seahub_custom_functions/__init__.py
Code example:
import os\nimport sys\n\ncurrent_path = os.path.dirname(os.path.abspath(__file__))\nseaserv_dir = os.path.join(current_path, \\\n '../../seafile-server-latest/seafile/lib64/python2.7/site-packages')\nsys.path.append(seaserv_dir)\n\ndef custom_get_groups(request):\n\n from seaserv import ccnet_api\n\n groups = []\n username = request.user.username\n\n # for current user\n groups += ccnet_api.get_groups(username)\n\n # for 'test@test.com' user\n groups += ccnet_api.get_groups('test@test.com')\n\n return groups\n
You should NOT change the name of custom_get_groups
and seahub_custom_functions/__init__.py
Tip
docker compose restart\n
cd /opt/seafile/seafile-server-latest\n./seahub.sh restart\n
There are currently five types of emails sent in Seafile:
The first four types of email are sent immediately. The last type is sent by a background task running periodically.
"},{"location":"config/sending_email/#options-of-email-sending","title":"Options of Email Sending","text":"Please add the following lines to seahub_settings.py
to enable email sending.
EMAIL_USE_TLS = True\nEMAIL_HOST = 'smtp.example.com' # smpt server\nEMAIL_HOST_USER = 'username@example.com' # username and domain\nEMAIL_HOST_PASSWORD = 'password' # password\nEMAIL_PORT = 587\nDEFAULT_FROM_EMAIL = EMAIL_HOST_USER\nSERVER_EMAIL = EMAIL_HOST_USER\n
Note
If your email service still does not work, you can check the log file logs/seahub.log
to see what may cause the problem. For a complete email notification list, please refer to email notification list.
If you want to use the email service without authentication, leave EMAIL_HOST_USER
and EMAIL_HOST_PASSWORD
blank (''
). (But note that the emails will then be sent without a From:
address.)
About using SSL connection (using port 465)
EMAIL_USE_SSL = True
instead of EMAIL_USE_TLS
.reply to
of email","text":"You can change the reply to field of email by adding the following settings to seahub_settings.py. This only affects email sending for file share links.
# Set reply-to header to user's email or not, defaults to ``False``. For details,\n# please refer to http://www.w3.org/Protocols/rfc822/\nADD_REPLY_TO_HEADER = True\n
"},{"location":"config/sending_email/#config-background-email-sending-task","title":"Config background email sending task","text":"The background task runs periodically to check whether a user has new unread notifications. If there are any, it sends a reminder email to that user. The background email sending task is controlled by seafevents.conf
.
[SEAHUB EMAIL]\n\n## must be \"true\" to enable user email notifications when there are new unread notifications\nenabled = true\n\n## interval of sending seahub email. Can be s(seconds), m(minutes), h(hours), d(days)\ninterval = 30m\n
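The interval value is a number followed by one of the unit suffixes listed above. A sketch converting such a string to seconds (the helper name is ours, not part of seafevents):

```python
# Sketch: convert an interval string like "30m" into seconds,
# supporting the s/m/h/d suffixes described above.
def interval_to_seconds(value):
    units = {'s': 1, 'm': 60, 'h': 3600, 'd': 86400}
    return int(value[:-1]) * units[value[-1]]
```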
"},{"location":"config/sending_email/#customize-email-messages","title":"Customize email messages","text":"The simplest way to customize the email messages is setting the SITE_NAME
variable in seahub_settings.py
. If it is not enough for your case, you can customize the email templates.
Tip
Subject lines may vary between releases; this is based on Release 5.0.0. Restart Seahub so that your changes take effect.
"},{"location":"config/sending_email/#the-email-base-template","title":"The email base template","text":"seahub/seahub/templates/email_base.html
Tip
You can copy email_base.html to seahub-data/custom/templates/email_base.html
and modify the new one. In this way, the customization will be maintained after upgrade.
Subject
seahub/seahub/auth/forms.py line:127
send_html_email(_(\"Reset Password on %s\") % site_name,\n email_template_name, c, None, [user.username])\n
Body
seahub/seahub/templates/registration/password_reset_email.html
Tip
You can copy password_reset_email.html to seahub-data/custom/templates/registration/password_reset_email.html
and modify the new one. In this way, the customization will be maintained after upgrade.
Subject
seahub/seahub/views/sysadmin.py line:424
send_html_email(_(u'Password has been reset on %s') % SITE_NAME,\n 'sysadmin/user_reset_email.html', c, None, [email])\n
Body
seahub/seahub/templates/sysadmin/user_add_email.html
Tip
You can copy user_add_email.html to seahub-data/custom/templates/sysadmin/user_add_email.html
and modify the new one. In this way, the customization will be maintained after upgrade.
Subject
seahub/seahub/views/sysadmin.py line:1224
send_html_email(_(u'Password has been reset on %s') % SITE_NAME,\n 'sysadmin/user_reset_email.html', c, None, [email])\n
Body
seahub/seahub/templates/sysadmin/user_reset_email.html
Tip
You can copy user_reset_email.html to seahub-data/custom/templates/sysadmin/user_reset_email.html
and modify the new one. In this way, the customization will be maintained after upgrade.
Subject
seahub/seahub/share/views.py line:913
try:\n if file_shared_type == 'f':\n c['file_shared_type'] = _(u\"file\")\n send_html_email(_(u'A file is shared to you on %s') % SITE_NAME,\n 'shared_link_email.html',\n c, from_email, [to_email],\n reply_to=reply_to\n )\n else:\n c['file_shared_type'] = _(u\"directory\")\n send_html_email(_(u'A directory is shared to you on %s') % SITE_NAME,\n 'shared_link_email.html',\n c, from_email, [to_email],\n reply_to=reply_to)\n
Body
seahub/seahub/templates/shared_link_email.html
seahub/seahub/templates/shared_upload_link_email.html
Tip
You can copy shared_link_email.html to seahub-data/custom/templates/shared_link_email.html
and modify the new one. In this way, the customization will be maintained after upgrade.
Subject
send_html_email(_('New notice on %s') % settings.SITE_NAME,\n 'notifications/notice_email.html', c,\n None, [to_user])\n
Body
seahub/seahub/notifications/templates/notifications/notice_email.html
"},{"location":"config/shibboleth_authentication/","title":"Shibboleth Authentication","text":"Shibboleth is a widely used single sign on (SSO) protocol. Seafile supports authentication via Shibboleth. It allows users from another organization to log in to Seafile without registering an account on the service provider.
In this documentation, we assume the reader is familiar with Shibboleth installation and configuration. For introduction to Shibboleth concepts, please refer to https://shibboleth.atlassian.net/wiki/spaces/CONCEPT/overview .
Shibboleth Service Provider (SP) should be installed on the same server as the Seafile server. The official SP from https://shibboleth.net/ is implemented as an Apache module. The module handles all Shibboleth authentication details. Seafile server receives authentication information (username) from HTTP request. The username then can be used as login name for the user.
Seahub provides a special URL to handle Shibboleth login. The URL is https://your-seafile-domain/sso
. Only this URL needs to be configured under Shibboleth protection. All other URLs don't go through the Shibboleth module. The overall workflow for a user to login with Shibboleth is as follows:
https://your-seafile-domain/sso
.https://your-seafile-domain/sso
.HTTP_REMOTE_USER
header) and brings the user to his/her home page. Since Shibboleth support requires Apache, if you want to use Nginx, you need two servers: one for non-Shibboleth access, and another configured with Apache to allow Shibboleth login. In a cluster environment, you can configure your load balancer to direct traffic to different servers according to the URL. Only the URL https://your-seafile-domain/sso
needs to be directed to Apache.
The configuration includes 3 steps:
We use CentOS 7 as example.
"},{"location":"config/shibboleth_authentication/#configure-apache","title":"Configure Apache","text":"You should create a new virtual host configuration for Shibboleth. And then restart Apache.
<IfModule mod_ssl.c>\n <VirtualHost _default_:443>\n ServerName your-seafile-domain\n DocumentRoot /var/www\n Alias /media /opt/seafile/seafile-server-latest/seahub/media\n\n ErrorLog ${APACHE_LOG_DIR}/seahub.error.log\n CustomLog ${APACHE_LOG_DIR}/seahub.access.log combined\n\n SSLEngine on\n SSLCertificateFile /path/to/ssl-cert.pem\n SSLCertificateKeyFile /path/to/ssl-key.pem\n\n <Location /Shibboleth.sso>\n SetHandler shib\n AuthType shibboleth\n ShibRequestSetting requireSession 1\n Require valid-user\n </Location>\n\n <Location /sso>\n SetHandler shib\n AuthType shibboleth\n ShibUseHeaders On\n ShibRequestSetting requireSession 1\n Require valid-user\n </Location>\n\n RewriteEngine On\n <Location /media>\n Require all granted\n </Location>\n\n # seafile fileserver\n ProxyPass /seafhttp http://127.0.0.1:8082\n ProxyPassReverse /seafhttp http://127.0.0.1:8082\n RewriteRule ^/seafhttp - [QSA,L]\n\n # seahub\n SetEnvIf Authorization \"(.*)\" HTTP_AUTHORIZATION=$1\n ProxyPass / http://127.0.0.1:8000/\n ProxyPassReverse / http://127.0.0.1:8000/\n\n # for http\n # RequestHeader set REMOTE_USER %{REMOTE_USER}e\n # for https\n RequestHeader set REMOTE_USER %{REMOTE_USER}s\n </VirtualHost>\n</IfModule>\n
"},{"location":"config/shibboleth_authentication/#install-and-configure-shibboleth","title":"Install and Configure Shibboleth","text":"Installation and configuration of Shibboleth is out of the scope of this documentation. You can refer to the official Shibboleth document.
"},{"location":"config/shibboleth_authentication/#configure-shibbolethsp","title":"Configure Shibboleth(SP)","text":""},{"location":"config/shibboleth_authentication/#shibboleth2xml","title":"shibboleth2.xml","text":"Open /etc/shibboleth/shibboleth2.xml
and change some properties. After you have made all the following changes, don't forget to restart the Shibboleth SP.
ApplicationDefaults
element","text":"Change entityID
and REMOTE_USER
properties:
<!-- The ApplicationDefaults element is where most of Shibboleth's SAML bits are defined. -->\n<ApplicationDefaults entityID=\"https://your-seafile-domain/sso\"\n REMOTE_USER=\"mail\"\n cipherSuites=\"DEFAULT:!EXP:!LOW:!aNULL:!eNULL:!DES:!IDEA:!SEED:!RC4:!3DES:!kRSA:!SSLv2:!SSLv3:!TLSv1:!TLSv1.1\">\n
Seahub extracts the username from the REMOTE_USER
environment variable. So you should modify your SP's shibboleth2.xml config file, so that Shibboleth translates your desired attribute into REMOTE_USER
environment variable.
In Seafile, only one of the following two attributes can be used for username: eppn
, and mail
. eppn
stands for \"Edu Person Principal Name\". It is usually the UserPrincipalName attribute in Active Directory. It's not necessarily a valid email address. mail
is the user's email address. You should set REMOTE_USER
to either one of these attributes.
SSO
element","text":"Change entityID
property:
<!--\nConfigures SSO for a default IdP. To properly allow for >1 IdP, remove\nentityID property and adjust discoveryURL to point to discovery service.\nYou can also override entityID on /Login query string, or in RequestMap/htaccess.\n-->\n<SSO entityID=\"https://your-IdP-domain\">\n <!--discoveryProtocol=\"SAMLDS\" discoveryURL=\"https://wayf.ukfederation.org.uk/DS\"-->\n SAML2\n</SSO>\n
"},{"location":"config/shibboleth_authentication/#metadataprovider-element","title":"MetadataProvider
element","text":"Change url
and backingFilePath
properties:
<!-- Example of remotely supplied batch of signed metadata. -->\n<MetadataProvider type=\"XML\" validate=\"true\"\n url=\"http://your-IdP-metadata-url\"\n backingFilePath=\"your-IdP-metadata.xml\" maxRefreshDelay=\"7200\">\n <MetadataFilter type=\"RequireValidUntil\" maxValidityInterval=\"2419200\"/>\n <MetadataFilter type=\"Signature\" certificate=\"fedsigner.pem\" verifyBackup=\"false\"/>\n
"},{"location":"config/shibboleth_authentication/#attribute-mapxml","title":"attribute-map.xml","text":"Open /etc/shibboleth/attribute-map.xml
and change some properties. After you have made all the following changes, don't forget to restart the Shibboleth SP.
Attribute
element","text":"Uncomment attribute elements for getting more user info:
<!-- Older LDAP-defined attributes (SAML 2.0 names followed by SAML 1 names)... -->\n<Attribute name=\"urn:oid:2.16.840.1.113730.3.1.241\" id=\"displayName\"/>\n<Attribute name=\"urn:oid:0.9.2342.19200300.100.1.3\" id=\"mail\"/>\n\n<Attribute name=\"urn:mace:dir:attribute-def:displayName\" id=\"displayName\"/>\n<Attribute name=\"urn:mace:dir:attribute-def:mail\" id=\"mail\"/>\n
"},{"location":"config/shibboleth_authentication/#upload-shibbolethsps-metadata","title":"Upload Shibboleth(SP)'s metadata","text":"After restarting Apache, you should be able to get the Service Provider metadata by accessing https://your-seafile-domain/Shibboleth.sso/Metadata. This metadata should be uploaded to the Identity Provider (IdP) server.
"},{"location":"config/shibboleth_authentication/#configure-seahub","title":"Configure Seahub","text":"Add the following configuration to seahub_settings.py.
ENABLE_SHIB_LOGIN = True\nSHIBBOLETH_USER_HEADER = 'HTTP_REMOTE_USER'\n# basic user attributes\nSHIBBOLETH_ATTRIBUTE_MAP = {\n \"HTTP_DISPLAYNAME\": (False, \"display_name\"),\n \"HTTP_MAIL\": (False, \"contact_email\"),\n}\nEXTRA_MIDDLEWARE = (\n 'shibboleth.middleware.ShibbolethRemoteUserMiddleware',\n)\nEXTRA_AUTHENTICATION_BACKENDS = (\n 'shibboleth.backends.ShibbolethRemoteUserBackend',\n)\n
Seahub can process additional user attributes from Shibboleth. These attributes are saved into Seahub's database as user properties. None of them are mandatory. The internal user properties Seahub currently supports are:
You can specify the mapping between Shibboleth attributes and Seahub's user properties in seahub_settings.py:
SHIBBOLETH_ATTRIBUTE_MAP = {\n \"HTTP_GIVENNAME\": (False, \"givenname\"),\n \"HTTP_SN\": (False, \"surname\"),\n \"HTTP_MAIL\": (False, \"contact_email\"),\n \"HTTP_ORGANIZATION\": (False, \"institution\"),\n}\n
In the above config, each key is a Shibboleth attribute name, and the second element of each value tuple is the corresponding Seahub property name. You can adjust the Shibboleth attribute names for your own needs.
You may have to change attribute-map.xml in your Shibboleth SP so that the desired attributes are passed to Seahub. You also have to make sure the IdP sends these attributes to the SP.
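The mapping described above can be sketched in plain Python to show how such an attribute map is typically applied (a hypothetical helper for illustration, not Seahub's actual middleware code):

```python
# Hypothetical helper, not Seahub's actual middleware code: shows how an
# attribute map like SHIBBOLETH_ATTRIBUTE_MAP can be applied to the
# headers the Shibboleth SP passes along with the request.
SHIBBOLETH_ATTRIBUTE_MAP = {
    "HTTP_GIVENNAME": (False, "givenname"),
    "HTTP_SN": (False, "surname"),
    "HTTP_MAIL": (False, "contact_email"),
    "HTTP_ORGANIZATION": (False, "institution"),
}

def extract_user_properties(request_meta):
    """Map SP-provided headers to user properties.

    The first element of each tuple marks the attribute as required;
    a missing required attribute aborts the login in this sketch.
    """
    properties = {}
    for header, (required, prop_name) in SHIBBOLETH_ATTRIBUTE_MAP.items():
        value = request_meta.get(header)
        if value:
            properties[prop_name] = value
        elif required:
            raise ValueError("missing required Shibboleth attribute: %s" % header)
    return properties

# Headers as they might appear in Django's request.META
meta = {"HTTP_GIVENNAME": "Jane", "HTTP_MAIL": "jane@example.edu"}
print(extract_user_properties(meta))  # {'givenname': 'Jane', 'contact_email': 'jane@example.edu'}
```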
We also added an option SHIB_ACTIVATE_AFTER_CREATION
(defaults to True
) which controls the user status after Shibboleth connection. If this option is set to False
, users will be inactive after connection, and system admins will be notified by email to activate their accounts.
Shibboleth has a field called affiliation. It is a list like: employee@uni-mainz.de;member@uni-mainz.de;faculty@uni-mainz.de;staff@uni-mainz.de.
We are able to set the user role from Shibboleth. For details about user roles, please refer to Roles and Permissions.
To enable this, modify SHIBBOLETH_ATTRIBUTE_MAP
above and add Shibboleth-affiliation
field. You may need to change Shibboleth-affiliation
according to your Shibboleth SP attributes.
SHIBBOLETH_ATTRIBUTE_MAP = {\n \"HTTP_GIVENNAME\": (False, \"givenname\"),\n \"HTTP_SN\": (False, \"surname\"),\n \"HTTP_MAIL\": (False, \"contact_email\"),\n \"HTTP_ORGANIZATION\": (False, \"institution\"),\n \"HTTP_Shibboleth-affiliation\": (False, \"affiliation\"),\n}\n
Then add a new config to define the affiliation role map:
SHIBBOLETH_AFFILIATION_ROLE_MAP = {\n 'employee@uni-mainz.de': 'staff',\n 'member@uni-mainz.de': 'staff',\n 'student@uni-mainz.de': 'student',\n 'employee@hu-berlin.de': 'guest',\n 'patterns': (\n ('*@hu-berlin.de', 'guest1'),\n ('*@*.de', 'guest2'),\n ('*', 'guest'),\n ),\n}\n
After Shibboleth login, Seafile calculates the user's role from the affiliation and SHIBBOLETH_AFFILIATION_ROLE_MAP.
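A minimal sketch of how this lookup might work (hypothetical code, not Seafile's actual implementation; the ordering rule — exact entries tried before wildcard patterns — is an assumption for illustration):

```python
# Hypothetical sketch of deriving a role from the semicolon-separated
# affiliation string using a map like SHIBBOLETH_AFFILIATION_ROLE_MAP.
# Assumption: exact entries take priority over wildcard patterns.
from fnmatch import fnmatch

SHIBBOLETH_AFFILIATION_ROLE_MAP = {
    'employee@uni-mainz.de': 'staff',
    'member@uni-mainz.de': 'staff',
    'student@uni-mainz.de': 'student',
    'employee@hu-berlin.de': 'guest',
    'patterns': (
        ('*@hu-berlin.de', 'guest1'),
        ('*@*.de', 'guest2'),
        ('*', 'guest'),
    ),
}

def role_from_affiliation(affiliation):
    patterns = SHIBBOLETH_AFFILIATION_ROLE_MAP.get('patterns', ())
    for entry in affiliation.split(';'):
        # exact match first
        if entry != 'patterns' and entry in SHIBBOLETH_AFFILIATION_ROLE_MAP:
            return SHIBBOLETH_AFFILIATION_ROLE_MAP[entry]
        # then wildcard patterns, in order
        for pattern, role in patterns:
            if fnmatch(entry, pattern):
                return role
    return None

print(role_from_affiliation('employee@uni-mainz.de;member@uni-mainz.de'))  # staff
```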
"},{"location":"config/shibboleth_authentication/#verify","title":"Verify","text":"After restarting Apache and Seahub service (./seahub.sh restart
), you can then test the Shibboleth login workflow.
If you encounter problems when logging in, follow these steps to get debug info (for Seafile Pro 6.3.13).
"},{"location":"config/shibboleth_authentication/#add-this-setting-to-seahub_settingspy","title":"Add this setting toseahub_settings.py
","text":"DEBUG = True\n
"},{"location":"config/shibboleth_authentication/#change-seafiles-code","title":"Change Seafile's code","text":"Open seafile-server-latest/seahub/thirdpart/shibboleth/middleware.py
Insert the following code at line 59
assert False\n
Insert the following code at line 65
if not username:\n assert False\n
The complete code after these changes is as follows:
#Locate the remote user header.\n# import pprint; pprint.pprint(request.META)\ntry:\n username = request.META[SHIB_USER_HEADER]\nexcept KeyError:\n assert False\n # If specified header doesn't exist then return (leaving\n # request.user set to AnonymousUser by the\n # AuthenticationMiddleware).\n return\n\nif not username:\n assert False\n\np_id = ccnet_api.get_primary_id(username)\nif p_id is not None:\n username = p_id\n
Then restart Seafile and log in again; you will see debug info in the web page.
"},{"location":"config/single_sign_on/","title":"Single Sign On support in Seafile","text":"Seafile supports most of the popular single-sign-on authentication protocols. Some are included in Community Edition, some are only in Pro Edition.
In the Community Edition:
Kerberos authentication can be integrated by using Apache as a proxy server and following the instructions in Remote User Authentication and Auto Login SeaDrive on Windows.
In Pro Edition:
Build Seafile
Seafile Open API
Seafile Implement Details
You can build Seafile from our source code package or directly from the GitHub repos.
Client
Server
Seafile internally uses a data model similar to Git's. It consists of Repo
, Commit
, FS
, and Block
.
Seafile's high performance comes from its architectural design: it stores file metadata in object storage (or the file system), while keeping only a small amount of library metadata in a relational database. An overview of the architecture is depicted below. We'll describe the data model in more detail.
"},{"location":"develop/data_model/#repo","title":"Repo","text":"A repo is also called a library. Every repo has an unique id (UUID), and attributes like description, creator, password.
The metadata for a repo is stored in seafile_db
database and in commit objects (see the description in a later section).
There are a few tables in the seafile_db
database containing important information about each repo.
Repo
: contains the ID for each repo.RepoOwner
: contains the owner id for each repo.RepoInfo
: it is a \"cache\" table for fast access to repo metadata stored in the commit object. It includes repo name, update time, last modifier.RepoSize
: the total size of all files in the repo.RepoFileCount
: the file count in the repo.RepoHead
: contains the \"head commit ID\". This ID points to the head commit in the storage, which will be described in the next section.Commit objects save the change history of a repo. Each update from the web interface, or sync upload operation will create a new commit object. A commit object contains the following information: commit ID, library name, creator of this commit (a.k.a. the modifier), creation time of this commit (a.k.a. modification time), root fs object ID, parent commit ID.
The root fs object ID points to the root FS object, from which we can traverse a file system snapshot for the repo.
The parent commit ID points to the last commit previous to the current commit. The RepoHead
table contains the latest head commit ID for each repo. From this head commit, we can traverse the repo history.
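The commit chain described above can be illustrated with a small sketch (the class and function names are assumptions for illustration, not Seafile's real API):

```python
# Illustrative sketch, not Seafile's real API: commit objects form a
# linked list through parent commit IDs, so the history of a repo can be
# traversed starting from the head commit ID stored in RepoHead.
class Commit:
    def __init__(self, commit_id, root_fs_id, parent_id, description):
        self.commit_id = commit_id
        self.root_fs_id = root_fs_id   # points to the root SeafDir object
        self.parent_id = parent_id     # previous commit; None for the first
        self.description = description

def iter_history(commit_store, head_commit_id):
    """Walk from the head commit back to the first commit."""
    commit_id = head_commit_id
    while commit_id is not None:
        commit = commit_store[commit_id]
        yield commit
        commit_id = commit.parent_id

store = {
    'c1': Commit('c1', 'fs-a', None, 'initial commit'),
    'c2': Commit('c2', 'fs-b', 'c1', 'added file.txt'),
}
print([c.commit_id for c in iter_history(store, 'c2')])  # ['c2', 'c1']
```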
If you use file system as storage backend, commit objects are stored in the path seafile-data/storage/commits/<repo_id>
. If you use object storage, commit objects are stored in the commits
bucket.
There are two types of FS objects, SeafDir Object
and Seafile Object
. SeafDir Object
represents a directory, and Seafile Object
represents a file.
The SeafDir
object contains metadata for each file/sub-folder, which includes name, last modification time, last modifier, size, and object ID. The object ID points to another SeafDir
or Seafile
object. The Seafile
object contains a block list, which is a list of block IDs for the file.
The FS object IDs are calculated from the contents of the objects. That means if a folder or a file is not changed, the same objects are reused across multiple commits. This allows us to create snapshots very efficiently.
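The content-addressing idea can be illustrated in a few lines (assuming a SHA-1 hash over raw content; the real object serialization format differs):

```python
# Minimal illustration of content-addressed storage. Assumption: SHA-1
# over raw bytes; Seafile's actual object format and hashing differ.
# Because object IDs are derived from content, identical directories or
# files map to the same ID and are stored only once across commits.
import hashlib

def object_id(content: bytes) -> str:
    return hashlib.sha1(content).hexdigest()

object_store = {}

def put_object(content: bytes) -> str:
    oid = object_id(content)
    object_store.setdefault(oid, content)   # reused if already present
    return oid

id1 = put_object(b'same directory listing')
id2 = put_object(b'same directory listing')
assert id1 == id2              # unchanged content -> same object ID
assert len(object_store) == 1  # stored only once
```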
If you use file system as storage backend, FS objects are stored in the path seafile-data/storage/fs/<repo_id>. If you use object storage, FS objects are stored in the fs bucket.
A file is further divided into blocks of variable length. We use a Content Defined Chunking algorithm to divide a file into blocks. A clear overview of this algorithm can be found at http://pdos.csail.mit.edu/papers/lbfs:sosp01/lbfs.pdf. On average, a block's size is around 8MB.
This mechanism makes it possible to deduplicate data between different versions of frequently updated files, improving storage efficiency. It also enables transferring data to/from multiple servers in parallel.
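A toy sketch of content-defined chunking follows (the window size, mask, and rolling-hash stand-in are made up for illustration; real implementations such as the Rabin-fingerprint scheme from the LBFS paper use a proper rolling hash plus minimum and maximum chunk sizes):

```python
# Toy content-defined chunking sketch. Assumptions: the "rolling hash"
# below is a cheap stand-in, and the parameters are illustrative only.
# A chunk boundary is declared wherever the hash of a small window hits
# a chosen bit pattern, so boundaries depend on content, not on fixed
# offsets -- inserting bytes early in a file shifts data but most later
# chunk boundaries (and hence chunk IDs) stay the same.
def cdc_chunks(data: bytes, window: int = 16, mask: int = 0xFF) -> list:
    chunks = []
    start = 0
    for i in range(window, len(data)):
        # stand-in for a rolling hash over the last `window` bytes
        h = sum(data[i - window:i]) & 0xFFFF
        if (h & mask) == 0:
            chunks.append(data[start:i])
            start = i
    chunks.append(data[start:])
    return chunks

data = bytes(range(256)) * 4
parts = cdc_chunks(data)
assert b''.join(parts) == data   # chunking is lossless
```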
If you use file system as storage backend, block objects are stored in the path seafile-data/storage/blocks/<repo_id>. If you use object storage, block objects are stored in the blocks bucket.
A \"virtual repo\" is a special repo that will be created in the cases below:
A virtual repo can be understood as a view of part of the data in its parent library. For example, when sharing a folder, the virtual repo only provides access to the shared folder in that library. A virtual repo uses the same underlying data as the parent library, so virtual repos use the same fs
and blocks
storage location as its parent.
Virtual repo has its own change history. So it has separate commits
storage location from its parent. The changes in a virtual repo and its parent repo are merged bidirectionally, so that changes from each side can be seen from the other.
There is a VirtualRepo
table in seafile_db
database. It contains the folder path in the parent repo for each virtual repo.
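The path mapping can be sketched as follows (the table layout and helper names are assumed for illustration, not the actual VirtualRepo schema):

```python
# Hypothetical sketch (table layout and names assumed): resolving a path
# inside a virtual repo to the corresponding path in its parent repo,
# using a mapping like the one stored in the VirtualRepo table.
virtual_repo_table = {
    # virtual_repo_id: (origin_repo_id, folder path in the parent repo)
    'v-repo-1': ('parent-repo-1', '/projects/shared'),
}

def to_parent_path(virtual_repo_id, path_in_virtual_repo):
    origin_repo_id, origin_path = virtual_repo_table[virtual_repo_id]
    full = origin_path.rstrip('/') + '/' + path_in_virtual_repo.lstrip('/')
    return origin_repo_id, full

print(to_parent_path('v-repo-1', '/doc.txt'))  # ('parent-repo-1', '/projects/shared/doc.txt')
```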
The following list is what you need to install on your development machine. You should install all of them before you build Seafile.
Package names are given for Ubuntu 14.04. For other Linux distros, please find the corresponding names yourself.
sudo apt-get install autoconf automake libtool libevent-dev libcurl4-openssl-dev libgtk2.0-dev uuid-dev intltool libsqlite3-dev valac libjansson-dev cmake qtchooser qtbase5-dev libqt5webkit5-dev qttools5-dev qttools5-dev-tools libssl-dev libargon2-dev\n
$ sudo yum install wget gcc libevent-devel openssl-devel gtk2-devel libuuid-devel sqlite-devel jansson-devel intltool cmake libtool vala gcc-c++ qt5-qtbase-devel qt5-qttools-devel qt5-qtwebkit-devel libcurl-devel openssl-devel argon2-devel\n
"},{"location":"develop/linux/#building","title":"Building","text":"First you should get the latest source of libsearpc/ccnet/seafile/seafile-client:
Download the source tarball of the latest tag from
For example, if the latest released seafile client is 8.0.0, then just use the v8.0.0 tags of the four projects. You should get four tarballs:
# without alias wget= might not work\nshopt -s expand_aliases\n\nexport version=8.0.0\nalias wget='wget --content-disposition -nc'\nwget https://github.com/haiwen/libsearpc/archive/v3.2-latest.tar.gz\nwget https://github.com/haiwen/ccnet/archive/v${version}.tar.gz \nwget https://github.com/haiwen/seafile/archive/v${version}.tar.gz\nwget https://github.com/haiwen/seafile-client/archive/v${version}.tar.gz\n
Now uncompress them:
tar xf libsearpc-3.2-latest.tar.gz\ntar xf ccnet-${version}.tar.gz\ntar xf seafile-${version}.tar.gz\ntar xf seafile-client-${version}.tar.gz\n
To build the Seafile client, you first need to build libsearpc, ccnet, and seafile.
"},{"location":"develop/linux/#set-paths","title":"set paths","text":"export PREFIX=/usr\nexport PKG_CONFIG_PATH=\"$PREFIX/lib/pkgconfig:$PKG_CONFIG_PATH\"\nexport PATH=\"$PREFIX/bin:$PATH\"\n
"},{"location":"develop/linux/#libsearpc","title":"libsearpc","text":"cd libsearpc-3.2-latest\n./autogen.sh\n./configure --prefix=$PREFIX\nmake\nsudo make install\ncd ..\n
"},{"location":"develop/linux/#seafile","title":"seafile","text":"In order to support notification server, you need to build libwebsockets first.
git clone --branch=v4.3.0 https://github.com/warmcat/libwebsockets\ncd libwebsockets\nmkdir build\ncd build\ncmake ..\nmake\nsudo make install\ncd ..\n
You can set --enable-ws
to no to disable the notification server. After that, you can build seafile:
cd seafile-${version}/\n./autogen.sh\n./configure --prefix=$PREFIX --disable-fuse\nmake\nsudo make install\ncd ..\n
"},{"location":"develop/linux/#seafile-client","title":"seafile-client","text":"cd seafile-client-${version}\ncmake -DCMAKE_BUILD_TYPE=Release -DCMAKE_INSTALL_PREFIX=$PREFIX .\nmake\nsudo make install\ncd ..\n
"},{"location":"develop/linux/#custom-prefix","title":"custom prefix","text":"when installing to a custom $PREFIX
, e.g. /opt
, you may need a script to set the path variables correctly
cat >$PREFIX/bin/seafile-applet.sh <<END\n#!/bin/bash\nexport LD_LIBRARY_PATH=\"$PREFIX/lib:$LD_LIBRARY_PATH\"\nexport PATH=\"$PREFIX/bin:$PATH\"\nexec seafile-applet $@\nEND\ncat >$PREFIX/bin/seaf-cli.sh <<END\nexport LD_LIBRARY_PATH=\"$PREFIX/lib:$LD_LIBRARY_PATH\"\nexport PATH=\"$PREFIX/bin:$PATH\"\nexport PYTHONPATH=$PREFIX/lib/python2.7/site-packages\nexec seaf-cli $@\nEND\nchmod +x $PREFIX/bin/seafile-applet.sh $PREFIX/bin/seaf-cli.sh\n
you can now start the client with $PREFIX/bin/seafile-applet.sh
.
The following setups are required for building and packaging Sync Client on macOS:
universal_archs arm64 x86_64
. Specifies the architectures for which MacPorts is compiled.+universal
. MacPorts installs universal versions of all ports.sudo port install autoconf automake pkgconfig libtool glib2 libevent vala openssl git jansson cmake libwebsockets argon2
.export PKG_CONFIG_PATH=$PKG_CONFIG_PATH:/opt/local/lib/pkgconfig:/usr/local/lib/pkgconfig\nexport PATH=/opt/local/bin:/usr/local/bin:/opt/local/Library/Frameworks/Python.framework/Versions/3.10/bin:$PATH\nexport LDFLAGS=\"-L/opt/local/lib -L/usr/local/lib\"\nexport CFLAGS=\"-I/opt/local/include -I/usr/local/include\"\nexport CPPFLAGS=\"-I/opt/local/include -I/usr/local/include\"\nexport LD_LIBRARY_PATH=/opt/lib:/usr/local/lib:/opt/local/lib/:/usr/local/lib/:$LD_LIBRARY_PATH\n\nQT_BASE=$HOME/Qt/6.2.4/macos\nexport PATH=$QT_BASE/bin:$PATH\nexport PKG_CONFIG_PATH=$QT_BASE/lib/pkgconfig:$PKG_CONFIG_PATH\nexport NOTARIZE_APPLE_ID=\"Your notarize account\"\nexport NOTARIZE_PASSWORD=\"Your notarize password\"\nexport NOTARIZE_TEAM_ID=\"Your notarize team id\"\n
The following directory structure is expected when building the Sync Client:
seafile-workspace/\nseafile-workspace/libsearpc/\nseafile-workspace/seafile/\nseafile-workspace/seafile-client/\n
The source code of these projects can be downloaded at github.com/haiwen/libsearpc, github.com/haiwen/seafile, and github.com/haiwen/seafile-client.
"},{"location":"develop/osx/#building","title":"Building","text":"Note: the building commands have been included in the packaging script, you can skip building commands while packaging.
To build libsearpc:
$ cd seafile-workspace/libsearpc/\n$ ./autogen.sh\n$ ./configure --disable-compile-demo --enable-compile-universal=yes\n$ make\n$ make install\n
To build seafile:
$ cd seafile-workspace/seafile/\n$ ./autogen.sh\n$ ./configure --disable-fuse --enable-compile-universal=yes\n$ make\n$ make install\n
To build seafile-client:
$ cd seafile-workspace/seafile-client/\n$ cmake -GXcode -B. -S.\n$ xcodebuild -target seafile-applet -configuration Release\n
"},{"location":"develop/osx/#packaging","title":"Packaging","text":"python3 build-mac-local-py3.py --brand=\"\" --version=1.0.0 --nostrip --universal
From Seafile 11.0, you can build Seafile release package with seafile-build script. You can check the README.md file in the same folder for detailed instructions.
The seafile-build.sh
compatible with more platforms, including Raspberry Pi, arm-64, x86-64.
Old version is below:
Table of contents:
Requirements:
sudo apt-get install build-essential\nsudo apt-get install libevent-dev libcurl4-openssl-dev libglib2.0-dev uuid-dev intltool libsqlite3-dev libmysqlclient-dev libarchive-dev libtool libjansson-dev valac libfuse-dev re2c flex python-setuptools cmake\n
"},{"location":"develop/rpi/#compile-development-libraries","title":"Compile development libraries","text":""},{"location":"develop/rpi/#libevhtp","title":"libevhtp","text":"libevhtp is a http server libary on top of libevent. It's used in seafile file server.
git clone https://www.github.com/haiwen/libevhtp.git\ncd libevhtp\ncmake -DEVHTP_DISABLE_SSL=ON -DEVHTP_BUILD_SHARED=OFF .\nmake\nsudo make install\n
After compiling all the libraries, run ldconfig
to update the system libraries cache:
sudo ldconfig\n
"},{"location":"develop/rpi/#install-python-libraries","title":"Install python libraries","text":"Create a new directory /home/pi/dev/seahub_thirdpart
:
mkdir -p ~/dev/seahub_thirdpart\n
Download these tarballs to /tmp/
:
Install all these libaries to /home/pi/dev/seahub_thirdpart
:
cd ~/dev/seahub_thirdpart\nexport PYTHONPATH=.\npip install -t ~/dev/seahub_thirdpart/ /tmp/pytz-2016.1.tar.gz\npip install -t ~/dev/seahub_thirdpart/ /tmp/Django-1.8.10.tar.gz\npip install -t ~/dev/seahub_thirdpart/ /tmp/django-statici18n-1.1.3.tar.gz\npip install -t ~/dev/seahub_thirdpart/ /tmp/djangorestframework-3.3.2.tar.gz\npip install -t ~/dev/seahub_thirdpart/ /tmp/django_compressor-1.4.tar.gz\npip install -t ~/dev/seahub_thirdpart/ /tmp/jsonfield-1.0.3.tar.gz\npip install -t ~/dev/seahub_thirdpart/ /tmp/django-post_office-2.0.6.tar.gz\npip install -t ~/dev/seahub_thirdpart/ /tmp/gunicorn-19.4.5.tar.gz\npip install -t ~/dev/seahub_thirdpart/ /tmp/flup-1.0.2.tar.gz\npip install -t ~/dev/seahub_thirdpart/ /tmp/chardet-2.3.0.tar.gz\npip install -t ~/dev/seahub_thirdpart/ /tmp/python-dateutil-1.5.tar.gz\npip install -t ~/dev/seahub_thirdpart/ /tmp/six-1.9.0.tar.gz\npip install -t ~/dev/seahub_thirdpart/ /tmp/django-picklefield-0.3.2.tar.gz\nwget -O /tmp/django_constance.zip https://github.com/haiwen/django-constance/archive/bde7f7c.zip\npip install -t ~/dev/seahub_thirdpart/ /tmp/django_constance.zip\npip install -t ~/dev/seahub_thirdpart/ /tmp/jdcal-1.2.tar.gz\npip install -t ~/dev/seahub_thirdpart/ /tmp/et_xmlfile-1.0.1.tar.gz\npip install -t ~/dev/seahub_thirdpart/ /tmp/openpyxl-2.3.0.tar.gz\n
"},{"location":"develop/rpi/#prepare-seafile-source-code","title":"Prepare seafile source code","text":"To build seafile server, there are four sub projects involved:
The build process has two steps:
build-server.py
script to build the server package from the source tarballs.Seafile manages the releases in tags on github.
Assume we are packaging for seafile server 6.0.1, then the tags are:
v6.0.1-sever
tag.v3.0-latest
tag (libsearpc has been quite stable and basically has no further development, so the tag is always v3.0-latest
)First setup the PKG_CONFIG_PATH
enviroment variable (So we don't need to make and make install libsearpc/ccnet/seafile into the system):
export PKG_CONFIG_PATH=/home/pi/dev/seafile/lib:$PKG_CONFIG_PATH\nexport PKG_CONFIG_PATH=/home/pi/dev/libsearpc:$PKG_CONFIG_PATH\nexport PKG_CONFIG_PATH=/home/pi/dev/ccnet:$PKG_CONFIG_PATH\n
"},{"location":"develop/rpi/#libsearpc","title":"libsearpc","text":"cd ~/dev\ngit clone https://github.com/haiwen/libsearpc.git\ncd libsearpc\ngit reset --hard v3.0-latest\n./autogen.sh\n./configure\nmake dist\n
"},{"location":"develop/rpi/#ccnet","title":"ccnet","text":"cd ~/dev\ngit clone https://github.com/haiwen/ccnet-server.git\ncd ccnet\ngit reset --hard v6.0.1-server\n./autogen.sh\n./configure\nmake dist\n
"},{"location":"develop/rpi/#seafile","title":"seafile","text":"cd ~/dev\ngit clone https://github.com/haiwen/seafile-server.git\ncd seafile\ngit reset --hard v6.0.1-server\n./autogen.sh\n./configure\nmake dist\n
"},{"location":"develop/rpi/#seahub","title":"seahub","text":"cd ~/dev\ngit clone https://github.com/haiwen/seahub.git\ncd seahub\ngit reset --hard v6.0.1-server\n./tools/gen-tarball.py --version=6.0.1 --branch=HEAD\n
"},{"location":"develop/rpi/#seafobj","title":"seafobj","text":"cd ~/dev\ngit clone https://github.com/haiwen/seafobj.git\ncd seafobj\ngit reset --hard v6.0.1-server\nmake dist\n
"},{"location":"develop/rpi/#seafdav","title":"seafdav","text":"cd ~/dev\ngit clone https://github.com/haiwen/seafdav.git\ncd seafdav\ngit reset --hard v6.0.1-server\nmake\n
"},{"location":"develop/rpi/#copy-the-source-tar-balls-to-the-same-folder","title":"Copy the source tar balls to the same folder","text":"mkdir ~/seafile-sources\ncp ~/dev/libsearpc/libsearpc-<version>-tar.gz ~/seafile-sources\ncp ~/dev/ccnet/ccnet-<version>-tar.gz ~/seafile-sources\ncp ~/dev/seafile/seafile-<version>-tar.gz ~/seafile-sources\ncp ~/dev/seahub/seahub-<version>-tar.gz ~/seafile-sources\n\ncp ~/dev/seafobj/seafobj.tar.gz ~/seafile-sources\ncp ~/dev/seafdav/seafdav.tar.gz ~/seafile-sources\n
"},{"location":"develop/rpi/#run-the-packaging-script","title":"Run the packaging script","text":"Now we have all the tarballs prepared, we can run the build-server.py
script to build the server package.
mkdir ~/seafile-server-pkgs\n~/dev/seafile/scripts/build-server.py --libsearpc_version=<libsearpc_version> --ccnet_version=<ccnet_version> --seafile_version=<seafile_version> --seahub_version=<seahub_version> --srcdir= --thirdpartdir=/home/pi/dev/seahub_thirdpart --srcdir=/home/pi/seafile-sources --outputdir=/home/pi/seafile-server-pkgs\n
After the script finisheds, we would get a seafile-server_6.0.1_pi.tar.gz
in ~/seafile-server-pkgs
folder.
The test should cover these steps at least:
seafile.sh start
and seahub.sh start
, you can login from a browser.This is the document for deploying Seafile open source development environment in Ubuntu 2204 docker container.
"},{"location":"develop/server/#create-persistent-directories","title":"Create persistent directories","text":"Login a linux server as root
user, then:
mkdir -p /root/seafile-ce-docker/source-code\nmkdir -p /root/seafile-ce-docker/conf\nmkdir -p /root/seafile-ce-docker/logs\nmkdir -p /root/seafile-ce-docker/mysql-data\nmkdir -p /root/seafile-ce-docker/seafile-data/library-template\n
"},{"location":"develop/server/#run-a-container","title":"Run a container","text":"After install docker, start a container to deploy seafile open source development environment.
docker run --mount type=bind,source=/root/seafile-ce-docker/source-code,target=/root/dev/source-code \\\n --mount type=bind,source=/root/seafile-ce-docker/conf,target=/root/dev/conf \\\n --mount type=bind,source=/root/seafile-ce-docker/logs,target=/root/dev/logs \\\n --mount type=bind,source=/root/seafile-ce-docker/seafile-data,target=/root/dev/seafile-data \\\n --mount type=bind,source=/root/seafile-ce-docker/mysql-data,target=/var/lib/mysql \\\n -it -p 8000:8000 -p 8082:8082 -p 3000:3000 --name seafile-ce-env ubuntu:22.04 bash\n
Note, the following commands are all executed in the seafile-ce-env docker container.
"},{"location":"develop/server/#update-source-and-install-dependencies","title":"Update Source and Install Dependencies.","text":"Update base system and install base dependencies:
apt-get update && apt-get upgrade -y\n\napt-get install -y ssh libevent-dev libcurl4-openssl-dev libglib2.0-dev uuid-dev intltool libsqlite3-dev libmysqlclient-dev libarchive-dev libtool libjansson-dev valac libfuse-dev python3-dateutil cmake re2c flex sqlite3 python3-pip python3-simplejson git libssl-dev libldap2-dev libonig-dev vim vim-scripts wget cmake gcc autoconf automake mysql-client librados-dev libxml2-dev curl sudo telnet netcat unzip netbase ca-certificates apt-transport-https build-essential libxslt1-dev libffi-dev libpcre3-dev libz-dev xz-utils nginx pkg-config poppler-utils libmemcached-dev sudo ldap-utils libldap2-dev libjwt-dev\n
Install Node 16 from nodesource:
curl -fsSL https://deb.nodesource.com/gpgkey/nodesource-repo.gpg.key | sudo gpg --dearmor -o /etc/apt/keyrings/nodesource.gpg\necho \"deb [signed-by=/etc/apt/keyrings/nodesource.gpg] https://deb.nodesource.com/node_16.x nodistro main\" | sudo tee /etc/apt/sources.list.d/nodesource.list\napt-get install -y nodejs\n
Install other Python 3 dependencies:
apt-get install -y python3 python3-dev python3-pip python3-setuptools python3-ldap\n\npython3 -m pip install --upgrade pip\n\npip3 install Django==4.2.* django-statici18n==2.3.* django_webpack_loader==1.7.* django_picklefield==3.1 django_formtools==2.4 django_simple_captcha==0.6.* djangosaml2==1.5.* djangorestframework==3.14.* python-dateutil==2.8.* pyjwt==2.6.* pycryptodome==3.16.* python-cas==1.6.* pysaml2==7.2.* requests==2.28.* requests_oauthlib==1.3.* future==0.18.* gunicorn==20.1.* mysqlclient==2.1.* qrcode==7.3.* pillow==10.2.* chardet==5.1.* cffi==1.15.1 captcha==0.5.* openpyxl==3.0.* Markdown==3.4.* bleach==5.0.* python-ldap==3.4.* sqlalchemy==2.0.18 redis mock pytest pymysql configparser pylibmc django-pylibmc nose exam splinter pytest-django\n
"},{"location":"develop/server/#install-mariadb-and-create-databases","title":"Install MariaDB and Create Databases","text":"apt-get install -y mariadb-server\nservice mariadb start\nmysqladmin -u root password your_password\n
SQL for creating the databases:
mysql -uroot -pyour_password -e \"CREATE DATABASE ccnet CHARACTER SET utf8;\"\nmysql -uroot -pyour_password -e \"CREATE DATABASE seafile CHARACTER SET utf8;\"\nmysql -uroot -pyour_password -e \"CREATE DATABASE seahub CHARACTER SET utf8;\"\n
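The three statements differ only in the database name, so they can be generated in a loop. A dry-run sketch (it only prints the mysql invocations; running them for real assumes a live MySQL server with root password your_password):

```shell
# Dry run: print the command for each Seafile database instead of executing it.
# Drop the echo quoting to execute against a live server.
out=$(for db in ccnet seafile seahub; do
  echo "mysql -uroot -pyour_password -e \"CREATE DATABASE $db CHARACTER SET utf8;\""
done)
echo "$out"
```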
"},{"location":"develop/server/#download-source-code","title":"Download Source Code","text":"cd ~/dev/source-code\n\ngit clone https://github.com/haiwen/libevhtp.git\ngit clone https://github.com/haiwen/libsearpc.git\ngit clone https://github.com/haiwen/seafile-server.git\ngit clone https://github.com/haiwen/seafevents.git\ngit clone https://github.com/haiwen/seafobj.git\ngit clone https://github.com/haiwen/seahub.git\n\ncd libevhtp/\ngit checkout tags/1.1.7 -b tag-1.1.7\n\ncd ../libsearpc/\ngit checkout tags/v3.3-latest -b tag-v3.3-latest\n\ncd ../seafile-server\ngit checkout tags/v11.0.5-server -b tag-v11.0.5-server\n\ncd ../seafevents\ngit checkout tags/v11.0.5-server -b tag-v11.0.5-server\n\ncd ../seafobj\ngit checkout tags/v11.0.5-server -b tag-v11.0.5-server\n\ncd ../seahub\ngit checkout tags/v11.0.5-server -b tag-v11.0.5-server\n
"},{"location":"develop/server/#compile-and-install-seaf-server","title":"Compile and Install seaf-server","text":"cd ../libevhtp\ncmake -DEVHTP_DISABLE_SSL=ON -DEVHTP_BUILD_SHARED=OFF .\nmake\nmake install\nldconfig\n\ncd ../libsearpc\n./autogen.sh\n./configure\nmake\nmake install\nldconfig\n\ncd ../seafile-server\n./autogen.sh\n./configure --disable-fuse\nmake\nmake install\nldconfig\n
"},{"location":"develop/server/#create-conf-files","title":"Create Conf Files","text":"cd ~/dev/conf\n\ncat > ccnet.conf <<EOF\n[Database]\nENGINE = mysql\nHOST = localhost\nPORT = 3306\nUSER = root\nPASSWD = 123456\nDB = ccnet\nCONNECTION_CHARSET = utf8\nCREATE_TABLES = true\nEOF\n\ncat > seafile.conf <<EOF\n[database]\ntype = mysql\nhost = localhost\nport = 3306\nuser = root\npassword = 123456\ndb_name = seafile\nconnection_charset = utf8\ncreate_tables = true\nEOF\n\ncat > seafevents.conf <<EOF\n[DATABASE]\ntype = mysql\nusername = root\npassword = 123456\nname = seahub\nhost = localhost\nEOF\n\ncat > seahub_settings.py <<EOF\nDATABASES = {\n 'default': {\n 'ENGINE': 'django.db.backends.mysql',\n 'NAME': 'seahub',\n 'USER': 'root',\n 'PASSWORD': '123456',\n 'HOST': 'localhost',\n 'PORT': '3306',\n }\n}\nFILE_SERVER_ROOT = 'http://127.0.0.1:8082'\nSERVICE_URL = 'http://127.0.0.1:8000'\nEOF\n
"},{"location":"develop/server/#start-seaf-server","title":"Start seaf-server","text":"seaf-server -F /root/dev/conf -d /root/dev/seafile-data -l /root/dev/logs/seafile.log >> /root/dev/logs/seafile.log 2>&1 &\n
"},{"location":"develop/server/#start-seafevents-and-seahub","title":"Start seafevents and seahub","text":""},{"location":"develop/server/#prepare-environment-variables","title":"Prepare environment variables","text":"export CCNET_CONF_DIR=/root/dev/conf\nexport SEAFILE_CONF_DIR=/root/dev/seafile-data\nexport SEAFILE_CENTRAL_CONF_DIR=/root/dev/conf\nexport SEAHUB_DIR=/root/dev/source-code/seahub\nexport SEAHUB_LOG_DIR=/root/dev/logs\nexport PYTHONPATH=/usr/local/lib/python3.10/dist-packages/:/usr/local/lib/python3.10/site-packages/:/root/dev/source-code/:/root/dev/source-code/seafobj/:/root/dev/source-code/seahub/thirdpart:$PYTHONPATH\n
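These exports disappear when the shell exits, so re-entering the container means retyping them. One way around that is to keep them in a small env file and source it in each new shell; a sketch (the /tmp path is an arbitrary choice for illustration, and the PYTHONPATH line from above should be appended the same way):

```shell
# Persist the development environment variables in a reusable file.
cat > /tmp/seafile-dev-env.sh <<'EOF'
export CCNET_CONF_DIR=/root/dev/conf
export SEAFILE_CONF_DIR=/root/dev/seafile-data
export SEAFILE_CENTRAL_CONF_DIR=/root/dev/conf
export SEAHUB_DIR=/root/dev/source-code/seahub
export SEAHUB_LOG_DIR=/root/dev/logs
EOF

# Load it in any shell before starting seafevents or seahub.
. /tmp/seafile-dev-env.sh
echo "$SEAHUB_LOG_DIR"
```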
"},{"location":"develop/server/#start-seafevents","title":"Start seafevents","text":"cd /root/dev/source-code/seafevents/\npython3 main.py --loglevel=debug --logfile=/root/dev/logs/seafevents.log --config-file /root/dev/conf/seafevents.conf >> /root/dev/logs/seafevents.log 2>&1 &\n
"},{"location":"develop/server/#start-seahub","title":"Start seahub","text":""},{"location":"develop/server/#create-seahub-database-tables","title":"Create seahub database tables","text":"cd /root/dev/source-code/seahub/\npython3 manage.py migrate\n
"},{"location":"develop/server/#create-user","title":"Create user","text":"python3 manage.py createsuperuser\n
"},{"location":"develop/server/#start-seahub_1","title":"Start seahub","text":"python3 manage.py runserver 0.0.0.0:8000\n
Then, you can visit http://127.0.0.1:8000/ to use Seafile.
"},{"location":"develop/server/#the-final-directory-structure","title":"The Final Directory Structure","text":""},{"location":"develop/server/#more","title":"More","text":""},{"location":"develop/server/#deploy-frontend-development-environment","title":"Deploy Frontend Development Environment","text":"For deploying the frontend development environment, you need to:
1. Check out seahub to the master branch
cd /root/dev/source-code/seahub\n\ngit fetch origin master:master\ngit checkout master\n
2. Add the following configuration to /root/dev/conf/seahub_settings.py
import os\nPROJECT_ROOT = '/root/dev/source-code/seahub'\nWEBPACK_LOADER = {\n 'DEFAULT': {\n 'BUNDLE_DIR_NAME': 'frontend/',\n 'STATS_FILE': os.path.join(PROJECT_ROOT,\n 'frontend/webpack-stats.dev.json'),\n }\n}\nDEBUG = True\n
3. Install JS modules
cd /root/dev/source-code/seahub/frontend\n\nnpm install\n
4. Run npm run dev
cd /root/dev/source-code/seahub/frontend\n\nnpm run dev\n
5. Start seaf-server and seahub
"},{"location":"develop/translation/","title":"Translation","text":""},{"location":"develop/translation/#seahub-seafile-server-71-and-above","title":"Seahub (Seafile Server 7.1 and above)","text":""},{"location":"develop/translation/#translate-and-try-locally","title":"Translate and try locally","text":"1. Locate the translation files in the seafile-server-latest/seahub directory:
/locale/<lang-code>/LC_MESSAGES/django.po
and /locale/<lang-code>/LC_MESSAGES/djangojs.po
/media/locales/<lang-code>/seafile-editor.json
For example, if you want to improve the Russian translation, find the corresponding strings to be edited in any of the following three files:
/seafile-server-latest/seahub/locale/ru/LC_MESSAGES/django.po
/seafile-server-latest/seahub/locale/ru/LC_MESSAGES/djangojs.po
/seafile-server-latest/seahub/media/locales/ru/seafile-editor.json
If there is no translation for your language, create a new folder matching your language code and copy the contents of another language folder into your newly created one. (Don't copy from the 'en' folder, because the files therein do not contain the strings to be translated.)
2. Edit the files using a UTF-8 editor.
3. Save your changes.
4. (Only necessary when you created a new language code folder) Add a new entry for your language to the language block in the /seafile-server-latest/seahub/seahub/settings.py
file and save it.
LANGUAGES = (\n ...\n ('ru', '\u0420\u0443\u0441\u0441\u043a\u0438\u0439'),\n ...\n)\n
5. (Only necessary when you edited either django.po or djangojs.po) Apply the changes made in django.po and djangojs.po by running the following two commands in /seafile-server-latest/seahub/locale/<lang-code>/LC_MESSAGES
:
msgfmt -o django.mo django.po
msgfmt -o djangojs.mo djangojs.po
Note: msgfmt is included in the gettext package.
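If you maintain more than one language, the two msgfmt commands can be generated for every .po file under the locale tree. A dry-run sketch (it prints the commands instead of running msgfmt, and builds a tiny stand-in tree so it is self-contained; in practice you would point it at seafile-server-latest/seahub/locale):

```shell
# Stand-in locale tree with one language (for illustration only).
LOCALE_DIR=$(mktemp -d)
mkdir -p "$LOCALE_DIR/ru/LC_MESSAGES"
touch "$LOCALE_DIR/ru/LC_MESSAGES/django.po" "$LOCALE_DIR/ru/LC_MESSAGES/djangojs.po"

# For every .po file, print the msgfmt command that would compile it to .mo.
cmds=$(find "$LOCALE_DIR" -name '*.po' | sort | while read -r po; do
  echo "msgfmt -o ${po%.po}.mo $po"
done)
echo "$cmds"
```

Replacing the echo with a real invocation compiles every catalog in one pass.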
Additionally, run the following two commands in the seafile-server-latest directory:
./seahub.sh python-env python3 seahub/manage.py compilejsi18n -l <lang-code>
./seahub.sh python-env python3 seahub/manage.py collectstatic --noinput -i admin -i termsandconditions --no-post-process
6. Restart Seahub to load changes made in django.po and djangojs.po; reload the Markdown editor to check your modifications in the seafile-editor.json file.
"},{"location":"develop/translation/#submit-your-translation","title":"Submit your translation","text":"Please submit translations via Transifex: https://www.transifex.com/projects/p/seahub/
Steps:
FileNotFoundError occurred when executing the command manage.py collectstatic.
FileNotFoundError: [Errno 2] No such file or directory: '/opt/seafile/seafile-server-latest/seahub/frontend/build'\n
Steps:
Modify STATICFILES_DIRS in /opt/seafile/seafile-server-latest/seahub/seahub/settings.py manually
STATICFILES_DIRS = (\n # Put strings here, like \"/home/html/static\" or \"C:/www/django/static\".\n # Always use forward slashes, even on Windows.\n # Don't forget to use absolute paths, not relative paths.\n '%s/static' % PROJECT_ROOT,\n# '%s/frontend/build' % PROJECT_ROOT,\n)\n
Execute the command
./seahub.sh python-env python3 seahub/manage.py collectstatic --noinput -i admin -i termsandconditions --no-post-process\n
Restore STATICFILES_DIRS manually
STATICFILES_DIRS = (\n # Put strings here, like \"/home/html/static\" or \"C:/www/django/static\".\n # Always use forward slashes, even on Windows.\n # Don't forget to use absolute paths, not relative paths.\n '%s/static' % PROJECT_ROOT,\n '%s/frontend/build' % PROJECT_ROOT,\n)\n
Restart Seahub
./seahub.sh restart\n
This issue has been fixed since version 11.0.
"},{"location":"develop/web_api_v2.1/","title":"Web API","text":""},{"location":"develop/web_api_v2.1/#seafile-web-api","title":"Seafile Web API","text":"The API document can be accessed in the following location:
The Admin API document can be accessed in the following location:
The following setups are required for building and packaging Sync Client on Windows:
vcpkg
# Example of the install command:\n$ ./vcpkg.exe install curl[core,openssl]:x64-windows\n
Python 3.7
Certificates
Note: certificates for Windows applications are issued by a third-party certificate authority.
Support for Breakpad can be added with the following steps:
install gyp tool
$ git clone --depth=1 git@github.com:chromium/gyp.git\n$ python setup.py install\n
compile breakpad
$ git clone --depth=1 git@github.com:google/breakpad.git\n$ cd breakpad\n$ git clone https://github.com/google/googletest.git testing\n$ cd ..\n# create vs solution; this may throw an error \"module collections.abc has no attribute OrderedDict\": if so, open msvs.py and replace 'collections.abc' with 'collections'.\n$ gyp --no-circular-check breakpad\\src\\client\\windows\\breakpad_client.gyp\n
compile dump_syms tool
create vs solution
gyp --no-circular-check breakpad\\src\\tools\\windows\\tools_windows.gyp\n
copy VC merge modules
copy C:\\Program Files (x86)\\Microsoft Visual Studio\\2019\\Community\\VC\\Redist\\MSVC\\v142\\MergeModules\\MergeModules\\Microsoft_VC142_CRT_x64.msm C:\\packagelib\n
The following directory structure is expected when building the Sync Client:
seafile-workspace/\nseafile-workspace/libsearpc/\nseafile-workspace/seafile/\nseafile-workspace/seafile-client/\nseafile-workspace/seafile-shell-ext/\n
The source code of these projects can be downloaded at github.com/haiwen/libsearpc, github.com/haiwen/seafile, github.com/haiwen/seafile-client, and github.com/haiwen/seafile-shell-ext.
"},{"location":"develop/windows/#building","title":"Building","text":"Note: the build commands are already included in the packaging script, so you can skip them when packaging.
To build libsearpc:
$ cd seafile-workspace/libsearpc/\n$ devenv libsearpc.sln /build \"Release|x64\"\n
To build seafile:
$ cd seafile-workspace/seafile/\n$ devenv seafile.sln /build \"Release|x64\"\n$ devenv msi/custom/seafile_custom.sln /build \"Release|x64\"\n
To build seafile-client:
$ cd seafile-workspace/seafile-client/\n$ devenv third_party/quazip/quazip.sln /build \"Release|x64\"\n$ devenv seafile-client.sln /build \"Release|x64\"\n
To build seafile-shell-ext:
$ cd seafile-workspace/seafile-shell-ext/\n$ devenv extensions/seafile_ext.sln /build \"Release|x64\"\n$ devenv seadrive-thumbnail-ext/seadrive_thumbnail_ext.sln /build \"Release|x64\"\n
"},{"location":"develop/windows/#packaging","title":"Packaging","text":"$ cd seafile-workspace/seafile-client/third_party/quazip\n$ devenv quazip.sln /build \"Release|x64\"\n$ cd seafile-workspace/seafile/scripts/build\n$ python build-msi-vs.py 1.0.0\n
If you use a cluster to deploy Seafile, you can use distributed indexing to achieve real-time indexing and improve indexing efficiency. The indexing process is as follows:
"},{"location":"extension/distributed_indexing/#install-redis-and-modify-configuration-files","title":"Install redis and modify configuration files","text":""},{"location":"extension/distributed_indexing/#1-install-redis-on-all-frontend-nodes","title":"1. Install redis on all frontend nodes","text":"Tip
If you use a Redis cloud service, skip this step and modify the configuration files directly.
Ubuntu: $ apt install redis-server\n
CentOS: $ yum install redis\n
"},{"location":"extension/distributed_indexing/#2-install-python-redis-third-party-package-on-all-frontend-nodes","title":"2. Install python redis third-party package on all frontend nodes","text":"$ pip install redis\n
"},{"location":"extension/distributed_indexing/#3-modify-the-seafeventsconf-on-all-frontend-nodes","title":"3. Modify the seafevents.conf
on all frontend nodes","text":"Add the following config items
[EVENTS PUBLISH]\nmq_type=redis # must be redis\nenabled=true\n\n[REDIS]\nserver=127.0.0.1 # your redis server host\nport=6379 # your redis server port\npassword=xxx # your redis server password; if there is no password, do not set this item\n
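To double-check what a frontend node will actually read from this section, you can pull the values back out of the file. A small parsing sketch (it writes a sample [REDIS] block with the placeholder values from above to a temporary file; it does not contact any Redis server):

```shell
# Sample config standing in for the [REDIS] section of seafevents.conf.
conf=$(mktemp)
cat > "$conf" <<'EOF'
[REDIS]
server=127.0.0.1
port=6379
EOF

# Extract the key=value pairs and show the connectivity check you could run.
host=$(awk -F= '/^server=/{print $2}' "$conf")
port=$(awk -F= '/^port=/{print $2}' "$conf")
echo "connection check: redis-cli -h $host -p $port ping"
```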
"},{"location":"extension/distributed_indexing/#4-modify-the-seafeventsconf-on-the-backend-node","title":"4. Modify the seafevents.conf
on the backend node","text":"Disable the scheduled indexing task, because it conflicts with the distributed indexing task.
[INDEX FILES]\nenabled=true\n |\n V\nenabled=false \n
"},{"location":"extension/distributed_indexing/#5-restart-seafile","title":"5. Restart Seafile","text":"Deploy in Docker: docker exec -it seafile bash\ncd /scripts\n./seafile.sh restart && ./seahub.sh restart\n
Deploy from binary packages: cd /opt/seafile/seafile-server-latest\n./seafile.sh restart && ./seahub.sh restart\n
"},{"location":"extension/distributed_indexing/#deploy-distributed-indexing","title":"Deploy distributed indexing","text":"First, prepare a seafes master node and several seafes slave nodes; the number of slave nodes depends on your needs. Deploy Seafile on these nodes, and copy the configuration files in the conf
directory from the frontend nodes. The master node and slave nodes do not need to start Seafile, but they do need to read the configuration files to obtain the necessary information.
Next, create a configuration file index-master.conf
in the conf
directory of the master node, e.g.
[DEFAULT]\nmq_type=redis # must be redis\n\n[REDIS]\nserver=127.0.0.1 # your redis server host\nport=6379 # your redis server port\npassword=xxx # your redis server password; if there is no password, do not set this item\n
Execute ./run_index_master.sh [start/stop/restart]
in the seafile-server-latest
directory (or /scripts
inside the Seafile docker container) to start, stop or restart the program.
Next, create a configuration file index-slave.conf
in the conf
directory of all slave nodes, e.g.
[DEFAULT]\nmq_type=redis # must be redis\nindex_workers=2 # number of threads to create/update indexes; you can increase this value according to your needs\n\n[REDIS]\nserver=127.0.0.1 # your redis server host\nport=6379 # your redis server port\npassword=xxx # your redis server password; if there is no password, do not set this item\n
Execute ./run_index_worker.sh [start/stop/restart]
in the seafile-server-latest
directory (or /scripts
inside the Seafile docker container) to start, stop or restart the program.
Note
The index worker connects to backend storage directly. You don't need to run seaf-server in index worker node.
"},{"location":"extension/distributed_indexing/#some-commands-in-distributed-indexing","title":"Some commands in distributed indexing","text":"Rebuild the search index; execute in the seafile-server-latest
directory (or /scripts
inside the Seafile docker container):
$ ./pro/pro.py search --clear\n$ ./run_index_master.sh python-env index_op.py --mode restore_all_repo\n
List the number of indexing tasks currently remaining; execute in the seafile-server-latest
directory (or /scripts
inside the Seafile docker container):
$ ./run_index_master.sh python-env index_op.py --mode show_all_task\n
The above commands need to be run on the master node.
"},{"location":"extension/fuse/","title":"FUSE extension","text":"Files in the Seafile system are split into blocks, which means that what is stored on your Seafile server is not complete files, but blocks. This design facilitates effective data deduplication.
However, administrators sometimes want to access the files directly on the server. You can use seaf-fuse to do this.
Seaf-fuse
is an implementation of the FUSE virtual filesystem. In a word, it mounts all the Seafile files to a folder (called the 'mount point'), so that you can access all the files managed by the Seafile server just as you access a normal folder on your server.
Note
Assume we want to mount to /opt/seafile-fuse
on the host.
Add the following content
seafile:\n ...\n volumes:\n ...\n - /opt/seafile-fuse:/seafile-fuse\n privileged: true\n cap_add:\n - SYS_ADMIN\n
"},{"location":"extension/fuse/#start-seaf-fuse-with-the-script-in-docker","title":"Start seaf-fuse with the script in docker","text":"Start Seafile server and enter the container
docker compose up -d\n\ndocker exec -it seafile bash\n
Start seaf-fuse in the container
cd /opt/seafile/seafile-server-latest/\n\n./seaf-fuse.sh start /seafile-fuse\n
"},{"location":"extension/fuse/#use-seaf-fuse-in-binary-based-deployment","title":"Use seaf-fuse in binary based deployment","text":"Assume we want to mount to /data/seafile-fuse
.
mkdir -p /data/seafile-fuse\n
"},{"location":"extension/fuse/#start-seaf-fuse-with-the-script","title":"Start seaf-fuse with the script","text":"Before starting seaf-fuse, you should have started the Seafile server with ./seafile.sh start
./seaf-fuse.sh start /data/seafile-fuse\n
seaf-fuse supports standard mount options for FUSE. For example, you can specify ownership for the mounted folder:
./seaf-fuse.sh start -o uid=<uid> /data/seafile-fuse\n
seaf-fuse enables the block cache by default to cache block objects, thereby reducing access to backend storage, but this function occupies local disk space. Since Seafile-pro-10.0.0, you can disable the block cache by adding the following option:
./seaf-fuse.sh start --disable-block-cache /data/seafile-fuse\n
You can find the complete list of supported options in man fuse
.
./seaf-fuse.sh stop\n
"},{"location":"extension/fuse/#contents-of-the-mounted-folder","title":"Contents of the mounted folder","text":""},{"location":"extension/fuse/#the-top-level-folder","title":"The top level folder","text":"Now you can list the content of /data/seafile-fuse
.
$ ls -lhp /data/seafile-fuse\n\ndrwxr-xr-x 2 root root 4.0K Jan 1 2015 abc@abc.com/\ndrwxr-xr-x 2 root root 4.0K Jan 4 2015 foo@foo.com/\ndrwxr-xr-x 2 root root 4.0K Jan 1 2015 plus@plus.com/\ndrwxr-xr-x 2 root root 4.0K Jan 1 2015 sharp@sharp.com/\ndrwxr-xr-x 2 root root 4.0K Jan 3 2015 test@test.com/\n
$ ls -lhp /data/seafile-fuse/abc@abc.com\n\ndrwxr-xr-x 2 root root 924 Jan 1 1970 5403ac56-5552-4e31-a4f1-1de4eb889a5f_Photos/\ndrwxr-xr-x 2 root root 1.6K Jan 1 1970 a09ab9fc-7bd0-49f1-929d-6abeb8491397_My Notes/\n
From the above list you can see that under each user's folder there are subfolders, each of which represents a library of that user and has a name of the format '{library_id}_{library_name}'.
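Since the separator is the first underscore, the id and name can be recovered with plain shell parameter expansion; a small sketch using the folder name from the listing above (library names containing underscores survive, because only the first underscore splits):

```shell
# Library folder name as it appears under the FUSE mount.
dir='5403ac56-5552-4e31-a4f1-1de4eb889a5f_Photos'

# The id is everything before the first underscore; the name is the rest.
lib_id=${dir%%_*}
lib_name=${dir#*_}

echo "id=$lib_id name=$lib_name"
```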
"},{"location":"extension/fuse/#the-folder-for-a-library","title":"The folder for a library","text":"$ ls -lhp /data/seafile-fuse/abc@abc.com/5403ac56-5552-4e31-a4f1-1de4eb889a5f_Photos/\n\n-rw-r--r-- 1 root root 501K Jan 1 2015 image.png\n-rw-r--r-- 1 root root 501K Jan 1 2015 sample.jpg\n
"},{"location":"extension/fuse/#if-you-get-a-permission-denied-error","title":"If you get a \"Permission denied\" error","text":"If you get an error message saying \"Permission denied\" when running ./seaf-fuse.sh start
, most likely you are not in the \"fuse group\". You should:
Add yourself to the fuse group
sudo usermod -a -G fuse <your-user-name>\n
Log out of your shell and log in again
./seaf-fuse.sh start <path>
again.
Deployment Tips
The steps from this guide only cover installing Collabora as another container on the same Docker host that your Seafile container runs on. Please make sure your host has sufficient cores and RAM.
If you want to install it on another host, please refer to the Collabora documentation for instructions. Then follow this guide to configure seahub_settings.py
to enable online office.
Note
To integrate LibreOffice with Seafile, you have to enable HTTPS in your Seafile server:
Deploy in Docker: Modify the .env
file:
SEAFILE_SERVER_PROTOCOL=https\n
Deploy from binary packages: Please follow the links to enable https:
Download the collabora.yml
wget https://manual.seafile.com/12.0/docker/collabora.yml\n
Insert collabora.yml
into the COMPOSE_FILE
list (i.e., COMPOSE_FILE='...,collabora.yml'
) and add the relevant options to .env
COLLABORA_IMAGE=collabora/code:24.04.5.1.1 # image of LibreOffice\nCOLLABORA_PORT=6232 # expose port\nCOLLABORA_USERNAME=<your LibreOffice admin username>\nCOLLABORA_PASSWORD=<your LibreOffice admin password>\nCOLLABORA_ENABLE_ADMIN_CONSOLE=true # enable admin console or not\nCOLLABORA_REMOTE_FONT= # remote font url\nCOLLABORA_ENABLE_FILE_LOGGING=false # use file logs or not, see FQA\n
"},{"location":"extension/libreoffice_online/#config-seafile","title":"Config Seafile","text":"Add following config option to seahub_settings.py:
OFFICE_SERVER_TYPE = 'CollaboraOffice'\nENABLE_OFFICE_WEB_APP = True\nOFFICE_WEB_APP_BASE_URL = 'https://seafile.example.com:6232/hosting/discovery'\n\n# Expiration of WOPI access token\n# WOPI access token is a string used by Seafile to determine the file's\n# identity and permissions when use LibreOffice Online view it online\n# And for security reason, this token should expire after a set time period\nWOPI_ACCESS_TOKEN_EXPIRATION = 30 * 60 # seconds\n\n# List of file formats that you want to view through LibreOffice Online\n# You can change this value according to your preferences\n# And of course you should make sure your LibreOffice Online supports to preview\n# the files with the specified extensions\nOFFICE_WEB_APP_FILE_EXTENSION = ('odp', 'ods', 'odt', 'xls', 'xlsb', 'xlsm', 'xlsx','ppsx', 'ppt', 'pptm', 'pptx', 'doc', 'docm', 'docx')\n\n# Enable edit files through LibreOffice Online\nENABLE_OFFICE_WEB_APP_EDIT = True\n\n# types of files should be editable through LibreOffice Online\nOFFICE_WEB_APP_EDIT_FILE_EXTENSION = ('odp', 'ods', 'odt', 'xls', 'xlsb', 'xlsm', 'xlsx','ppsx', 'ppt', 'pptm', 'pptx', 'doc', 'docm', 'docx')\n
Then restart Seafile.
Click an office file in the Seafile web interface and you will see the online preview rendered by CollaboraOnline. Here is an example:
"},{"location":"extension/libreoffice_online/#trouble-shooting","title":"Trouble shooting","text":"Understanding how the integration works will help you debug the problem. When a user visits a file page:
The CollaboraOnline container writes its logs to stdout; you can use the following command to access them
docker logs seafile-collabora\n
If you would like to save the logs to a file (i.e., a .log
file), you can modify .env
with the following statement, and uncomment the relevant lines in collabora.yml
# .env\nCOLLABORA_ENABLE_FILE_LOGGING=True\nCOLLABORA_PATH=/opt/collabora # path of the collabora logs\n
# collabora.yml\n# remove the following notes\n...\nservices:\n collabora:\n ...\n volumes:\n - \"${COLLABORA_PATH:-/opt/collabora}/logs:/opt/cool/logs/\" # chmod 777 needed\n ...\n...\n
Create the logs directory, and restart the Seafile server
mkdir -p /opt/collabora\nchmod 777 /opt/collabora\ndocker compose down\ndocker compose up -d\n
"},{"location":"extension/libreoffice_online/#collaboraonline-server-on-a-separate-host","title":"CollaboraOnline server on a separate host","text":"If your CollaboraOnline server is on a separate host, you just need to modify seahub_settings.py
as when deploying on the same host. The only difference is that you have to change the field OFFICE_WEB_APP_BASE_URL
to your CollaboraOnline host (e.g., https://collabora-online.seafile.com/hosting/discovery
).
Currently, the status updates of files and libraries on the client and web interface are based on polling the server. The latest status cannot be reflected in real time on the client due to polling delays. The client needs to periodically refresh the library modification, file locking, subdirectory permissions and other information, which causes additional performance overhead to the server.
When a directory is opened on the web interface, the lock status of the file cannot be updated in real time, and the page needs to be refreshed.
The notification server uses websocket protocol and maintains a two-way communication connection with the client or the web interface. When the above changes occur, seaf-server will notify the notification server of the changes. Then the notification server can notify the client or the web interface in real time. This not only improves the real-time performance, but also reduces the performance overhead of the server.
"},{"location":"extension/notification-server/#supported-update-reminder-types","title":"Supported update reminder types","text":"Since Seafile 12.0, we use a separate Docker image to deploy the notification server. First download notification-server.yml
to the Seafile directory:
wget https://manual.seafile.com/12.0/docker/notification-server.yml\n
Modify .env
, and insert notification-server.yml
into COMPOSE_FILE
:
COMPOSE_FILE='seafile-server.yml,caddy.yml,notification-server.yml'\n
And you need to add the following configurations under seafile.conf:
[notification]\nenabled = true\n# the ip of notification server. (default is `notification-server` in Docker)\nhost = notification-server\n# the port of notification server\nport = 8083\n
You can run notification server with the following command:
docker compose down\ndocker compose up -d\n
"},{"location":"extension/notification-server/#checking-notification-server-status","title":"Checking notification server status","text":"When the notification server is working, you can access http://127.0.0.1:8083/ping
from your browser, which will answer {\"ret\": \"pong\"}
. If you have a proxy configured, you can access https://{server}/notification/ping
from your browser instead.
If the client works with notification server, there should be a log message in seafile.log or seadrive.log.
Notification server is enabled on the remote server xxxx\n
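The ping check above can also be scripted for monitoring. A sketch (check_pong only validates a response body against the documented reply; the curl call is guarded because the example address http://127.0.0.1:8083/ping is normally only reachable on the Seafile host itself):

```shell
# Succeeds only when the body matches the documented ping reply.
# Assumption: the server answers exactly {"ret": "pong"}; if the JSON
# formatting ever differs, parse it properly instead of comparing strings.
check_pong() {
  [ "$1" = '{"ret": "pong"}' ]
}

# On the Seafile host, fetch the body (guarded so the sketch does not fail
# where no notification server is running or curl is absent).
if command -v curl >/dev/null 2>&1; then
  body=$(curl -s --max-time 2 http://127.0.0.1:8083/ping 2>/dev/null || true)
else
  body=""
fi

if check_pong "$body"; then
  echo "notification server is up"
else
  echo "no valid pong reply"
fi
```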
"},{"location":"extension/notification-server/#notification-server-in-seafile-cluster","title":"Notification Server in Seafile cluster","text":"There are no additional features for the notification server in the Pro Edition. It works the same as in the Community Edition.
If you enable clustering, you need to deploy the notification server on one of the servers or on a separate server. The load balancer should forward WebSocket requests to this node.
Download .env
and notification-server.yml
to the notification server directory:
wget https://manual.seafile.com/12.0/docker/notification-server/standalone/notification-server.yml\nwget -O .env https://manual.seafile.com/12.0/docker/notification-server/standalone/env\n
Then modify the .env
file according to your environment. The following fields are needed to be modified:
NOTIFICATION_SERVER_VOLUME
The volume directory of notification server data SEAFILE_MYSQL_DB_HOST
Seafile MySQL host SEAFILE_MYSQL_DB_USER
Seafile MySQL user, default is seafile
SEAFILE_MYSQL_DB_PASSWORD
Seafile MySQL password TIME_ZONE
Time zone JWT_PRIVATE_KEY
JWT key, the same as the config in Seafile .env
file SEAFILE_SERVER_HOSTNAME
Seafile host name SEAFILE_SERVER_PROTOCOL
http or https You can run notification server with the following command:
docker compose up -d\n
And you need to add the following configurations under seafile.conf
and restart Seafile server:
[notification]\nenabled = true\n# the ip of notification server.\nhost = 192.168.0.83\n# the port of notification server\nport = 8083\n
You need to configure load balancer according to the following forwarding rules:
/notification/ping
requests to notification server via http protocol./notification
to notification server.Here is a configuration that uses haproxy to support notification server. Haproxy version needs to be >= 2.0. You should use similar configurations for other load balancers.
#/etc/haproxy/haproxy.cfg\n\n# Other existing haproxy configurations\n......\n\nfrontend seafile\n bind 0.0.0.0:80\n mode http\n option httplog\n option dontlognull\n option forwardfor\n acl notif_ping_request url_sub -i /notification/ping\n acl ws_requests url -i /notification\n acl hdr_connection_upgrade hdr(Connection) -i upgrade\n acl hdr_upgrade_websocket hdr(Upgrade) -i websocket\n use_backend ws_backend if hdr_connection_upgrade hdr_upgrade_websocket\n use_backend notif_ping_backend if notif_ping_request\n use_backend ws_backend if ws_requests\n default_backend backup_nodes\n\nbackend backup_nodes\n cookie SERVERID insert indirect nocache\n server seafileserver01 192.168.0.137:80\n\nbackend notif_ping_backend\n option forwardfor\n server ws 192.168.0.137:8083\n\nbackend ws_backend\n option forwardfor # This sets X-Forwarded-For\n server ws 192.168.0.137:8083\n
"},{"location":"extension/office_web_app/","title":"Office Online Server","text":"In Seafile Professional Server Version 4.4.0 (or above), you can use Microsoft Office Online Server (formerly named Office Web Apps) to preview documents online. Office Online Server provides the best preview for all Office format files. It also support collaborative editing of Office files directly in the web browser. For organizations with Microsoft Office Volume License, it's free to use Office Online Server. For more information about Office Online Server and how to deploy it, please refer to https://technet.microsoft.com/en-us/library/jj219455(v=office.16).aspx.
Seafile only supports Office Online Server 2016 and above
To use Office Online Server for preview, please add following config option to seahub_settings.py.
# Enable Office Online Server\nENABLE_OFFICE_WEB_APP = True\n\n# URL of Office Online Server's discovery page\n# The discovery page tells Seafile how to interact with Office Online Server when viewing files online\n# Change `http://example.office-web-app.com` to your actual Office Online Server address\nOFFICE_WEB_APP_BASE_URL = 'http://example.office-web-app.com/hosting/discovery'\n\n# Expiration of the WOPI access token\n# The WOPI access token is a string used by Seafile to determine a file's\n# identity and permissions when Office Online Server views it online\n# For security reasons, this token should expire after a set time period\nWOPI_ACCESS_TOKEN_EXPIRATION = 60 * 60 * 24 # seconds\n\n# List of file formats that you want to view through Office Online Server\n# You can change this value according to your preferences\n# Of course, your Office Online Server must support previewing\n# files with the specified extensions\nOFFICE_WEB_APP_FILE_EXTENSION = ('ods', 'xls', 'xlsb', 'xlsm', 'xlsx','ppsx', 'ppt',\n 'pptm', 'pptx', 'doc', 'docm', 'docx')\n\n# Enable editing files through Office Online Server\nENABLE_OFFICE_WEB_APP_EDIT = True\n\n# Types of files that should be editable through Office Online Server\n# Note: Office Online Server 2016 is needed for editing docx\nOFFICE_WEB_APP_EDIT_FILE_EXTENSION = ('xlsx', 'pptx', 'docx')\n\n\n# HTTPS authentication related (optional)\n\n# Server certificates\n# Path to a CA_BUNDLE file or directory with certificates of trusted CAs\n# NOTE: If you set this to a directory, the directory must have been processed using the c_rehash utility supplied with OpenSSL.\nOFFICE_WEB_APP_SERVER_CA = '/path/to/certfile'\n\n\n# Client certificates\n# You can specify a single file (containing the private key and the certificate) to use as the client side certificate\nOFFICE_WEB_APP_CLIENT_PEM = 'path/to/client.pem'\n\n# or you can specify these two file paths to use as the client side certificate\nOFFICE_WEB_APP_CLIENT_CERT = 'path/to/client.cert'\nOFFICE_WEB_APP_CLIENT_KEY = 'path/to/client.key'\n
Then restart Seafile:
./seafile.sh restart\n./seahub.sh restart\n
After you click a document of a type specified in seahub_settings.py, you will see the new preview page.
"},{"location":"extension/office_web_app/#trouble-shooting","title":"Trouble shooting","text":"Understanding how the web app integration works is going to help you debugging the problem. When a user visits a file page:
Please check the Nginx log for Seahub (for step 3) and Office Online Server to see which step is wrong.
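The discovery page that Seafile fetches is an XML document listing, per file extension, which view/edit actions the server supports. The following stdlib-only sketch (not Seafile code; the embedded XML is a simplified, hypothetical discovery response) shows how such a page can be inspected when debugging:

```python
# Parse a WOPI discovery page to see which extensions/actions are advertised.
# Real Office Online Server responses contain many more entries.
import xml.etree.ElementTree as ET

SAMPLE_DISCOVERY = """<wopi-discovery>
  <net-zone name="external-http">
    <app name="Word">
      <action name="view" ext="docx" urlsrc="http://oos.example.com/wv/wordviewerframe.aspx?"/>
      <action name="edit" ext="docx" urlsrc="http://oos.example.com/we/wordeditorframe.aspx?"/>
    </app>
  </net-zone>
</wopi-discovery>"""

def discovery_actions(xml_text):
    """Return a dict mapping (extension, action name) -> urlsrc."""
    root = ET.fromstring(xml_text)
    actions = {}
    for action in root.iter("action"):
        actions[(action.get("ext"), action.get("name"))] = action.get("urlsrc")
    return actions

actions = discovery_actions(SAMPLE_DISCOVERY)
print(actions[("docx", "view")])
```

If an extension you configured in OFFICE_WEB_APP_FILE_EXTENSION is missing from the real discovery response, the server does not support previewing it.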
Warning
You should make sure you have configured at least a few GB of paging files in your Windows system. Otherwise the IIS worker processes may die randomly when handling Office Online requests.
"},{"location":"extension/only_office/","title":"OnlyOffice","text":"Seafile supports OnlyOffice to view/edit office files online. In order to use OnlyOffice, you must first deploy an OnlyOffice server.
Deployment Tips
You can deploy OnlyOffice on the same machine as Seafile (only supported when deploying with Docker, with sufficient cores and RAM) using the onlyoffice.yml
provided by Seafile according to this document, or you can deploy it on a different machine according to the official OnlyOffice documentation.
Download the onlyoffice.yml
wget https://manual.seafile.com/12.0/docker/onlyoffice.yml\n
Insert onlyoffice.yml
into the COMPOSE_FILE
list (i.e., COMPOSE_FILE='...,onlyoffice.yml'
), and add the following OnlyOffice configuration to the .env
file.
# OnlyOffice image\nONLYOFFICE_IMAGE=onlyoffice/documentserver:8.1.0.1\n\n# Persistent storage directory of OnlyOffice\nONLYOFFICE_VOLUME=/opt/onlyoffice\n\n# OnlyOffice document server port\nONLYOFFICE_PORT=6233\n\n# jwt secret, generated by `pwgen -s 40 1` \nONLYOFFICE_JWT_SECRET=<your jwt secret>\n
Note
From Seafile 12.0, OnlyOffice's JWT verification is always enabled. Communication between Seafile and OnlyOffice is protected by a shared secret. You can generate the JWT secret with the following command:
pwgen -s 40 1\n
Also modify seahub_settings.py
ENABLE_ONLYOFFICE = True\nONLYOFFICE_APIJS_URL = 'https://seafile.example.com:6233/web-apps/apps/api/documents/api.js'\nONLYOFFICE_FILE_EXTENSION = ('doc', 'docx', 'ppt', 'pptx', 'xls', 'xlsx', 'odt', 'fodt', 'odp', 'fodp', 'ods', 'fods', 'csv', 'ppsx', 'pps')\nONLYOFFICE_JWT_SECRET = '<your jwt secret>'\n
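For illustration, the stdlib-only sketch below shows how an HS256 JWT is signed with such a shared secret. Seafile and OnlyOffice handle this internally, so this is purely to clarify the mechanism; the payload and secret are placeholders:

```python
# Build an HS256 JWT by hand: base64url(header).base64url(payload) signed
# with HMAC-SHA256 using the shared secret.
import base64
import hashlib
import hmac
import json

def b64url(data: bytes) -> str:
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def sign_jwt(payload: dict, secret: str) -> str:
    header = {"alg": "HS256", "typ": "JWT"}
    signing_input = (b64url(json.dumps(header, separators=(",", ":")).encode())
                     + "."
                     + b64url(json.dumps(payload, separators=(",", ":")).encode()))
    sig = hmac.new(secret.encode(), signing_input.encode(), hashlib.sha256).digest()
    return signing_input + "." + b64url(sig)

token = sign_jwt({"document": {"key": "abc123"}}, "<your jwt secret>")
print(token.count("."))  # a JWT has three dot-separated parts, so this prints 2
```

If both sides are configured with the same ONLYOFFICE_JWT_SECRET, OnlyOffice can verify that requests really come from Seafile; a mismatch produces "document security token" errors like the one described below.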
Tip
By default, OnlyOffice uses port 6233 for communication between Seafile and the Document Server. You can change the bound port by specifying ONLYOFFICE_PORT
; the port in ONLYOFFICE_APIJS_URL
in seahub_settings.py
must then be changed accordingly.
The following configuration options are only for OnlyOffice experts. You can create and mount a custom configuration file called local-production-linux.json
to force some settings.
nano local-production-linux.json\n
For example, you can configure OnlyOffice to save automatically by copying the following code block into this file:
{\n \"services\": {\n \"CoAuthoring\": {\n \"autoAssembly\": {\n \"enable\": true,\n \"interval\": \"5m\"\n }\n }\n },\n \"FileConverter\": {\n \"converter\": {\n \"downloadAttemptMaxCount\": 3\n }\n }\n}\n
Mount this config file into your onlyoffice block in onlyoffice.yml
:
services:\n ...\n onlyoffice:\n ...\n volumes:\n ...\n - <Your path to local-production-linux.json>:/etc/onlyoffice/documentserver/local-production-linux.json\n...\n
For more information you can check the official documentation: https://api.onlyoffice.com/editors/signature/ and https://github.com/ONLYOFFICE/Docker-DocumentServer#available-configuration-parameters
"},{"location":"extension/only_office/#restart-seafile-docker-instance-and-test-that-onlyoffice-is-running","title":"Restart Seafile-docker instance and test that OnlyOffice is running","text":"docker-compose down\ndocker-compose up -d\n
Success
After the installation process is finished, visit this page to make sure you have deployed OnlyOffice successfully: http{s}://{your Seafile server's domain or IP}:6233/welcome
. You should see the message \"Document Server is running\" on this page.
First, run docker logs -f seafile-onlyoffice
, then open an office file. After the \"Download failed.\" error appears on the page, check the logs for the following error:
==> /var/log/onlyoffice/documentserver/converter/out.log <==\n...\nError: DNS lookup {local IP} (family:undefined, host:undefined) is not allowed. Because, It is a private IP address.\n...\n
If you see this error message and you haven't enabled JWT while using a local network, it is likely an error triggered proactively by the OnlyOffice server for enhanced security. (https://github.com/ONLYOFFICE/DocumentServer/issues/2268#issuecomment-1600787905)
So, as mentioned in the post, we highly recommend enabling JWT in your integration to fix this problem.
"},{"location":"extension/only_office/#the-document-security-token-is-not-correctly-formed","title":"The document security token is not correctly formed","text":"Starting from OnlyOffice Docker-DocumentServer version 7.2, JWT is enabled by default on OnlyOffice server.
So, for security reasons, please configure OnlyOffice to use a JWT secret.
"},{"location":"extension/only_office/#onlyoffice-on-a-separate-host-and-url","title":"OnlyOffice on a separate host and URL","text":"In general, you only need to specify the values \u200b\u200bof the following fields in seahub_settings.py
and then restart the service.
ENABLE_ONLYOFFICE = True\nONLYOFFICE_APIJS_URL = 'http{s}://<Your OnlyOffice host url>/web-apps/apps/api/documents/api.js'\nONLYOFFICE_FILE_EXTENSION = ('doc', 'docx', 'ppt', 'pptx', 'xls', 'xlsx', 'odt', 'fodt', 'odp', 'fodp', 'ods', 'fods', 'csv', 'ppsx', 'pps')\nONLYOFFICE_JWT_SECRET = '<your jwt secret>'\n
"},{"location":"extension/only_office/#about-ssl","title":"About SSL","text":"For deployments using the onlyoffice.yml
file in this document, SSL is primarily handled by Caddy. If the OnlyOffice document server and the Seafile server are not on the same machine, please refer to the official document to configure SSL for OnlyOffice.
SeaDoc is an extension of Seafile that provides an online collaborative document editor.
SeaDoc is designed around the following key ideas:
SeaDoc excels at:
The SeaDoc architecture is shown below:
Here is the workflow when a user opens an sdoc file in the browser:
The easiest way to deploy SeaDoc is to run it with the Seafile server on the same host, using the same Docker network. If you need to deploy SeaDoc standalone, follow the next section.
Download the seadoc.yml
to /opt/seafile
wget https://manual.seafile.com/12.0/docker/seadoc.yml\n
Modify .env
, and insert seadoc.yml
into COMPOSE_FILE
, and enable SeaDoc server
COMPOSE_FILE='seafile-server.yml,caddy.yml,seadoc.yml'\n\nENABLE_SEADOC=true\nSEADOC_SERVER_URL=https://seafile.example.com/sdoc-server\n
Start the SeaDoc server with the following command
docker compose up -d\n
Now you can use SeaDoc!
"},{"location":"extension/setup_seadoc/#deploy-seadoc-standalone","title":"Deploy SeaDoc standalone","text":"If you deploy Seafile in a cluster or if you deploy Seafile with binary package, you need to setup SeaDoc as a standalone service. Here are the steps:
Download the .env
and seadoc.yml
files to the directory /opt/seadoc, then modify them:
wget https://manual.seafile.com/12.0/docker/seadoc/1.0/standalone/seadoc.yml\nwget -O .env https://manual.seafile.com/12.0/docker/seadoc/1.0/standalone/env\n
Then modify the .env
file according to your environment. The following fields need to be modified:
SEADOC_VOLUME: the volume directory of SeaDoc data
SEAFILE_MYSQL_DB_HOST: Seafile MySQL host
SEAFILE_MYSQL_DB_USER: Seafile MySQL user, default is seafile
SEAFILE_MYSQL_DB_PASSWORD: Seafile MySQL password
TIME_ZONE: time zone
JWT_PRIVATE_KEY: JWT key, the same as the setting in the Seafile .env file
SEAFILE_SERVER_HOSTNAME: Seafile host name
SEAFILE_SERVER_PROTOCOL: http or https
(Optional) By default, the SeaDoc server binds to port 80 on the host machine. If that port is already taken by another service, you have to change the listening port of SeaDoc:
Modify seadoc.yml
services:\n seadoc:\n ...\n ports:\n - \"<your SeaDoc server port>:80\"\n...\n
Add a reverse proxy for the SeaDoc server. In a cluster environment, this means you need to add reverse proxy rules at the load balancer. Here, we use Nginx as an example (please replace 127.0.0.1:80
with the host:port
of your SeaDoc server)
...\nserver {\n ...\n\n location /sdoc-server/ {\n proxy_pass http://127.0.0.1:80/;\n proxy_redirect off;\n proxy_set_header Host $host;\n proxy_set_header X-Real-IP $remote_addr;\n proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;\n proxy_set_header X-Forwarded-Host $server_name;\n proxy_set_header X-Forwarded-Proto $scheme;\n\n client_max_body_size 100m;\n }\n\n location /socket.io {\n proxy_pass http://127.0.0.1:80;\n proxy_http_version 1.1;\n proxy_set_header Upgrade $http_upgrade;\n proxy_set_header Connection 'upgrade';\n proxy_redirect off;\n\n proxy_buffers 8 32k;\n proxy_buffer_size 64k;\n\n proxy_set_header X-Real-IP $remote_addr;\n proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;\n proxy_set_header Host $http_host;\n proxy_set_header X-NginX-Proxy true;\n }\n}\n
Start the SeaDoc server with the following command
docker compose up -d\n
Modify Seafile server's configuration and start SeaDoc server
Warning
After using a reverse proxy, your SeaDoc service will be located at the /sdoc-server
path of your reverse proxy (e.g., xxx.example.com/sdoc-server
). For example:
Then SEADOC_SERVER_URL
will be
http{s}://xxx.example.com/sdoc-server\n
Modify .env
in your Seafile-server host:
ENABLE_SEADOC=true\nSEADOC_SERVER_URL=https://seafile.example.com/sdoc-server\n
Restart Seafile server
Deploy in Docker (including cluster mode):\ndocker compose down\ndocker compose up -d\n
Deploy from binary packages:
cd /opt/seafile/seafile-server-latest\n./seahub.sh restart\n
/opt/seadoc-data is the placeholder for shared volumes. You may elect to store certain persistent information outside of a container; in our case, we keep various log files outside. This allows you to rebuild containers easily without losing important information.
SeaDoc uses one database table, seahub_db.sdoc_operation_log
, to store operation logs.
Seafile can scan uploaded files for malicious content in the background. When configured to run periodically, the scan process scans all existing libraries on the server. In each scan, the process only scans files uploaded or updated since the last scan. For each file, the process executes a user-specified virus scan command to check whether the file contains a virus. Most anti-virus programs provide a command line utility for Linux.
To enable this feature, add the following options to seafile.conf
:
[virus_scan]\nscan_command = (command for checking virus)\nvirus_code = (command exit codes when file is virus)\nnonvirus_code = (command exit codes when file is not virus)\nscan_interval = (scanning interval, in unit of minutes, default to 60 minutes)\n
More details about the options:
An example for ClamAV (http://www.clamav.net/) is provided below:
[virus_scan]\nscan_command = clamscan\nvirus_code = 1\nnonvirus_code = 0\n
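Conceptually, Seafile runs scan_command on each file and compares the exit status with virus_code and nonvirus_code; any other status is treated as a scan error. A rough sketch of that decision logic (an illustration, not Seafile's actual implementation; assumes a POSIX shell is available):

```python
# Classify a file by running a scan command and mapping its exit status,
# mirroring the virus_code / nonvirus_code settings in seafile.conf.
import subprocess

VIRUS_CODE = 1      # from virus_code (clamscan exits 1 when a virus is found)
NONVIRUS_CODE = 0   # from nonvirus_code (clamscan exits 0 when clean)

def classify(command: str) -> str:
    result = subprocess.run(command, shell=True)
    if result.returncode == VIRUS_CODE:
        return "virus"
    if result.returncode == NONVIRUS_CODE:
        return "clean"
    return "scan-error"

# Simulate the three outcomes with plain shell exits:
print(classify("exit 0"))  # clean
print(classify("exit 1"))  # virus
print(classify("exit 2"))  # scan-error
```

In a real setup the command would be something like `clamscan /path/to/file`; the sketch only demonstrates how the exit codes are interpreted.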
To test whether your configuration works, you can trigger a scan manually:
cd seafile-server-latest\n./pro/pro.py virus_scan\n
If a virus was detected, you can see scan records and delete infected files on the Virus Scan page in the admin area.
Note
If you directly use the ClamAV command line tool (clamscan) to scan files, scanning takes a long time, because the virus database is loaded anew for every invocation. To speed it up, we recommend running ClamAV as a daemon. Please refer to Run ClamAV as a Daemon
When running ClamAV as a daemon, the scan_command
in seafile.conf
should be clamdscan
. An example for clamav-daemon is provided below:
[virus_scan]\nscan_command = clamdscan\nvirus_code = 1\nnonvirus_code = 0\n
Since Pro Edition 6.0.0, a few more options have been added to provide finer-grained control over virus scanning.
[virus_scan]\n......\nscan_size_limit = (size limit for files to be scanned) # The unit is MB.\nscan_skip_ext = (a comma (',') separated list of file extensions to be ignored)\nthreads = (number of concurrent threads for scan, one thread for one file, default to 4)\n
The file extensions should start with '.'. The extensions are case-insensitive. By default, files with the following extensions will be ignored:
.bmp, .gif, .ico, .png, .jpg, .mp3, .mp4, .wav, .avi, .rmvb, .mkv\n
The list you provide overrides the default list.
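The skip rules described above can be sketched as follows (a simplified illustration of the documented behavior — case-insensitive extension matching plus scan_size_limit — not Seafile's actual source code):

```python
# Decide whether a file should be skipped by the virus scan, based on the
# extension skip list and the optional size limit (in MB).
import os

DEFAULT_SKIP_EXT = {".bmp", ".gif", ".ico", ".png", ".jpg", ".mp3",
                    ".mp4", ".wav", ".avi", ".rmvb", ".mkv"}

def should_skip(path, size_bytes, skip_ext=DEFAULT_SKIP_EXT, size_limit_mb=None):
    ext = os.path.splitext(path)[1].lower()   # extensions compared case-insensitively
    if ext in skip_ext:
        return True
    if size_limit_mb is not None and size_bytes > size_limit_mb * 1024 * 1024:
        return True
    return False

print(should_skip("photo.JPG", 1024))                              # skipped: extension
print(should_skip("report.docx", 30 * 1024**2, size_limit_mb=20))  # skipped: too large
print(should_skip("report.docx", 1024))                            # will be scanned
```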
"},{"location":"extension/virus_scan/#scanning-files-on-upload","title":"Scanning Files on Upload","text":"You may also configure Seafile to scan files for virus upon the files are uploaded. This only works for files uploaded via web interface or web APIs. Files uploaded with syncing or SeaDrive clients cannot be scanned on upload due to performance consideration.
You may scan files uploaded from shared upload links by adding the option below to seahub_settings.py
:
ENABLE_UPLOAD_LINK_VIRUS_CHECK = True\n
Since Pro Edition 11.0.7, you may scan all uploaded files via web APIs by adding the option below to seafile.conf
:
[fileserver]\ncheck_virus_on_web_upload = true\n
"},{"location":"extension/virus_scan_with_clamav/","title":"Deploy ClamAV with Seafile","text":""},{"location":"extension/virus_scan_with_clamav/#deploy-with-docker","title":"Deploy with Docker","text":"If your Seafile server is deployed using Docker, we also recommend that you use Docker to deploy ClamAV by following the steps below, otherwise you can deploy it from binary package of ClamAV.
"},{"location":"extension/virus_scan_with_clamav/#download-clamavyml-and-insert-to-docker-compose-lists-in-env","title":"Download clamav.yml and insert to Docker-compose lists in .env","text":"Download clamav.yml
wget https://manual.seafile.com/12.0/docker/pro/clamav.yml\n
Modify .env
, insert clamav.yml
to field COMPOSE_FILE
COMPOSE_FILE='seafile-server.yml,caddy.yml,clamav.yml'\n
"},{"location":"extension/virus_scan_with_clamav/#modify-seafileconf","title":"Modify seafile.conf","text":"Add the following statements to seafile.conf
[virus_scan]\nscan_command = clamdscan\nvirus_code = 1\nnonvirus_code = 0\nscan_interval = 5\nscan_size_limit = 20\nthreads = 2\n
"},{"location":"extension/virus_scan_with_clamav/#restart-docker-container","title":"Restart docker container","text":"docker compose down\ndocker compose up -d \n
Wait a few minutes until ClamAV finishes initializing.
Then ClamAV is ready to use.
"},{"location":"extension/virus_scan_with_clamav/#use-clamav-in-binary-based-deployment","title":"Use ClamAV in binary based deployment","text":""},{"location":"extension/virus_scan_with_clamav/#install-clamav-daemon-clamav-freshclam","title":"Install clamav-daemon & clamav-freshclam","text":"apt-get install clamav-daemon clamav-freshclam\n
You should run clamd with root permissions so it can scan all files. Edit the config file /etc/clamav/clamd.conf
and change the following lines:
LocalSocketGroup root\nUser root\n
"},{"location":"extension/virus_scan_with_clamav/#start-the-clamav-daemon","title":"Start the clamav-daemon","text":"systemctl start clamav-daemon\n
Test the software
$ curl https://secure.eicar.org/eicar.com.txt | clamdscan -\n
The output must include:
stream: Eicar-Test-Signature FOUND\n
"},{"location":"extension/virus_scan_with_kav4fs/","title":"Virus Scan with kav4fs","text":""},{"location":"extension/virus_scan_with_kav4fs/#prerequisite","title":"Prerequisite","text":"Assume you have installed Kaspersky Anti-Virus for Linux File Server on the Seafile Server machine.
If the user that runs Seafile Server is not root, it should have sudo privileges so it can run kav4fs-control without entering a password. Add the following content to /etc/sudoers:
<user of running seafile server> ALL=(ALL:ALL) ALL\n<user of running seafile server> ALL=NOPASSWD: /opt/kaspersky/kav4fs/bin/kav4fs-control\n
"},{"location":"extension/virus_scan_with_kav4fs/#script","title":"Script","text":"As the return code of kav4fs cannot reflect the file scan result, we use a shell wrapper script to parse the scan output and based on the parse result to return different return codes to reflect the scan result.
Save following contents to a file such as kav4fs_scan.sh
:
#!/bin/bash\n\nTEMP_LOG_FILE=`mktemp /tmp/XXXXXXXXXX`\nVIRUS_FOUND=1\nCLEAN=0\nUNDEFINED=2\nKAV4FS='/opt/kaspersky/kav4fs/bin/kav4fs-control'\nif [ ! -x $KAV4FS ]\nthen\n echo \"Binary not executable\"\n exit $UNDEFINED\nfi\n\nsudo $KAV4FS --scan-file \"$1\" > $TEMP_LOG_FILE\nif [ \"$?\" -ne 0 ]\nthen\n echo \"Error due to check file '$1'\"\n exit 3\nfi\nTHREATS_C=`grep 'Threats found:' $TEMP_LOG_FILE|cut -d':' -f 2|sed 's/ //g'`\nRISKWARE_C=`grep 'Riskware found:' $TEMP_LOG_FILE|cut -d':' -f 2|sed 's/ //g'`\nINFECTED=`grep 'Infected:' $TEMP_LOG_FILE|cut -d':' -f 2|sed 's/ //g'`\nSUSPICIOUS=`grep 'Suspicious:' $TEMP_LOG_FILE|cut -d':' -f 2|sed 's/ //g'`\nSCAN_ERRORS_C=`grep 'Scan errors:' $TEMP_LOG_FILE|cut -d':' -f 2|sed 's/ //g'`\nPASSWORD_PROTECTED=`grep 'Password protected:' $TEMP_LOG_FILE|cut -d':' -f 2|sed 's/ //g'`\nCORRUPTED=`grep 'Corrupted:' $TEMP_LOG_FILE|cut -d':' -f 2|sed 's/ //g'`\n\nrm -f $TEMP_LOG_FILE\n\nif [ $THREATS_C -gt 0 -o $RISKWARE_C -gt 0 -o $INFECTED -gt 0 -o $SUSPICIOUS -gt 0 ]\nthen\n exit $VIRUS_FOUND\nelif [ $SCAN_ERRORS_C -gt 0 -o $PASSWORD_PROTECTED -gt 0 -o $CORRUPTED -gt 0 ]\nthen\n exit $UNDEFINED\nelse\n exit $CLEAN\nfi\n
Grant execute permissions for the script (make sure it is owned by the user Seafile is running as):
chmod u+x kav4fs_scan.sh\n
The meanings of the script's return codes:
1: found virus\n0: no virus\nother: scan failed\n
"},{"location":"extension/virus_scan_with_kav4fs/#configuration","title":"Configuration","text":"Add following content to seafile.conf
:
[virus_scan]\nscan_command = <absolute path of kav4fs_scan.sh>\nvirus_code = 1\nnonvirus_code = 0\nscan_interval = <scanning interval, in unit of minutes, default to 60 minutes>\n
"},{"location":"extension/webdav/","title":"WebDAV extension","text":"In the document below, we assume your seafile installation folder is /opt/seafile
.
The configuration file is /opt/seafile-data/seafile/conf/seafdav.conf
(for deploying from binary packages, it should be /opt/seafile/conf/seafdav.conf
). If it is not created already, you can just create the file.
[WEBDAV]\n\n# Default is false. Change it to true to enable SeafDAV server.\nenabled = true\n\nport = 8080\ndebug = true\n\n# If you deploy seafdav behind nginx/apache, you need to modify \"share_name\".\nshare_name = /seafdav\n\n# SeafDAV uses Gunicorn as web server.\n# This option maps to Gunicorn's 'workers' setting. https://docs.gunicorn.org/en/stable/settings.html?#workers\n# By default it's set to 5 processes.\nworkers = 5\n\n# This option maps to Gunicorn's 'timeout' setting. https://docs.gunicorn.org/en/stable/settings.html?#timeout\n# By default it's set to 1200 seconds, to support large file uploads.\ntimeout = 1200\n
Every time the configuration is modified, you need to restart the Seafile server for the changes to take effect.
Deploy in Docker:\ndocker compose restart\n
Deploy from binary packages:
cd /opt/seafile/seafile-server-latest/\n./seafile.sh restart\n
Your WebDAV client can then access the Seafile WebDAV server at http{s}://example.com/seafdav/
(for deploying from binary packages, it should be http{s}://example.com:8080/seafdav/
)
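To quickly verify that the endpoint answers, you can issue a raw PROPFIND request with Python's standard library alone. This is a sketch: the host, path, and credentials below are placeholders, and `Depth: 1` asks the server to list the libraries visible to the authenticated user.

```python
# Probe a SeafDAV endpoint with a bare PROPFIND request over HTTPS.
import base64
import http.client

def basic_auth(user: str, password: str) -> str:
    """Build an HTTP Basic Authorization header value."""
    return "Basic " + base64.b64encode(f"{user}:{password}".encode()).decode()

def propfind(host, path, user, password, port=443):
    conn = http.client.HTTPSConnection(host, port)
    conn.request("PROPFIND", path, headers={
        "Authorization": basic_auth(user, password),
        "Depth": "1",  # list immediate children (the user's libraries)
    })
    resp = conn.getresponse()
    return resp.status, resp.read()

# Example (requires a reachable server):
# status, body = propfind("example.com", "/seafdav/", "user@example.com", "secret")
# A working SeafDAV server answers with 207 Multi-Status and an XML listing.
```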
Since Pro Edition 7.1.8 and Community Edition 7.1.5, an option is available to append the library ID to the library name returned by SeafDAV.
show_repo_id=true\n
"},{"location":"extension/webdav/#proxy-only-for-deploying-from-binary-packages","title":"Proxy (only for deploying from binary packages)","text":"Tip
When deploying with Docker, the WebDAV server is already proxied at /seafdav/*
, so you can skip this step.
For Seafdav, the configuration of Nginx is as follows:
.....\n\n location /seafdav {\n rewrite ^/seafdav$ /seafdav/ permanent;\n }\n\n location /seafdav/ {\n proxy_pass http://127.0.0.1:8080/seafdav/;\n proxy_set_header Host $host;\n proxy_set_header X-Real-IP $remote_addr;\n proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;\n proxy_set_header X-Forwarded-Host $server_name;\n proxy_set_header X-Forwarded-Proto $scheme;\n proxy_read_timeout 1200s;\n client_max_body_size 0;\n\ufeff\n access_log /var/log/nginx/seafdav.access.log seafileformat;\n error_log /var/log/nginx/seafdav.error.log;\n }\n\n location /:dir_browser {\n proxy_pass http://127.0.0.1:8080/:dir_browser;\n }\n
For Seafdav, the configuration of Apache is as follows:
......\n <Location /seafdav>\n ProxyPass \"http://127.0.0.1:8080/seafdav\"\n </Location>\n
"},{"location":"extension/webdav/#notes-on-clients","title":"Notes on Clients","text":"Please first note that, there are some known performance limitation when you map a Seafile webdav server as a local file system (or network drive).
So WebDAV is more suitable for infrequent file access. If you want better performance, please use the sync client instead.
Windows Explorer supports HTTPS connections, but it requires a valid certificate on the server. It's generally recommended to use Windows Explorer to map a WebDAV server as a network drive. If you use a self-signed certificate, you have to add the certificate's CA into Windows' system CA store.
On Linux you have more choices. You can use file manager such as Nautilus to connect to webdav server. Or you can use davfs2 from the command line.
To use davfs2
sudo apt-get install davfs2\nsudo mount -t davfs -o uid=<username> https://example.com/seafdav /media/seafdav/\n
The -o uid option sets the owner of the mounted directory to the given user so that it's writable for non-root users.
It's recommended to disable LOCK operation for davfs2. You have to edit /etc/davfs2/davfs2.conf
use_locks 0\n
Finder's support for WebDAV is slow and not very stable, so it is recommended to use a WebDAV client such as Cyberduck.
"},{"location":"extension/webdav/#frequently-asked-questions","title":"Frequently Asked Questions","text":""},{"location":"extension/webdav/#clients-cant-connect-to-seafdav-server","title":"Clients can't connect to seafdav server","text":"By default, seafdav is disabled. Check whether you have enabled = true
in seafdav.conf
. If not, modify it and restart seafile server.
If you deploy SeafDAV behind Nginx/Apache, make sure to change the value of share_name
as the sample configuration above. Restart your seafile server and try again.
First, check the seafdav.log
to see if there is a log entry like the following.
\"MOVE ... -> 502 Bad Gateway\n
If you have enabled debug, there will also be the following log.
09:47:06.533 - DEBUG : Raising DAVError 502 Bad Gateway: Source and destination must have the same scheme.\nIf you are running behind a reverse proxy, you may have to rewrite the 'Destination' header.\n(See https://github.com/mar10/wsgidav/issues/183)\n\n09:47:06.533 - DEBUG : Caught (502, \"Source and destination must have the same scheme.\\nIf you are running behind a reverse proxy, you may have to rewrite the 'Destination' header.\\n(See https://github.com/mar10/wsgidav/issues/183)\")\n
This issue usually occurs when you have configured HTTPS, but the request was forwarded, resulting in the HTTP_X_FORWARDED_PROTO
value in the request received by Seafile not being HTTPS.
You can solve this by manually changing the value of HTTP_X_FORWARDED_PROTO
. For example, in nginx, change
proxy_set_header X-Forwarded-Proto $scheme;\n
to
proxy_set_header X-Forwarded-Proto https;\n
"},{"location":"extension/webdav/#windows-explorer-reports-file-size-exceeds-the-limit-allowed-and-cannot-be-saved","title":"Windows Explorer reports \"file size exceeds the limit allowed and cannot be saved\"","text":"This happens when you map webdav as a network drive, and tries to copy a file larger than about 50MB from the network drive to a local folder.
This is because Windows Explorer has a limit of the file size downloaded from webdav server. To make this size large, change the registry entry on the client machine. There is a registry key named FileSizeLimitInBytes
under HKEY_LOCAL_MACHINE -> SYSTEM -> CurrentControlSet -> Services -> WebClient -> Parameters
.
Seafile Server consists of the following two components:
seaf-server
): data service daemon, handles raw file upload, download and synchronization. The Seafile server listens on port 8082 by default. You can configure Nginx/Apache to proxy traffic to the local 8082 port.
Tip
All access to the Seafile service (including Seahub and Seafile server) can be configured behind Nginx or Apache web server. This way all network traffic to the service can be encrypted with HTTPS.
"},{"location":"introduction/contribution/","title":"Contribution","text":""},{"location":"introduction/contribution/#licensing","title":"Licensing","text":"The different components of Seafile project are released under different licenses:
Forum: https://forum.seafile.com
Follow us @seafile https://twitter.com/seafile
"},{"location":"introduction/contribution/#report-a-bug","title":"Report a Bug","text":"Seafile manages files using libraries. Every library has an owner, who can share the library to other users or share it with groups. The sharing can be read-only or read-write.
"},{"location":"introduction/file_permission_management/#read-only-syncing","title":"Read-only syncing","text":"Read-only libraries can be synced to local desktop. The modifications at the client will not be synced back. If a user has modified some file contents, he can use \"resync\" to revert the modifications.
"},{"location":"introduction/file_permission_management/#cascading-permissionsub-folder-permissions-pro-edition","title":"Cascading permission/Sub-folder permissions (Pro edition)","text":"Sharing controls whether a user or group can see a library, while sub-folder permissions are used to modify permissions on specific folders.
Supposing you share a library as read-only to a group and then want specific sub-folders to be read-write for a few users, you can set read-write permissions on sub-folders for some users and groups.
Note
Please check https://www.seafile.com/en/roadmap/
"},{"location":"outdate/change_default_java/","title":"Change default java","text":"When you have both Java 6 and Java 7 installed, the default Java may not be Java 7.
Do this by typing java -version
, and check the output.
If the default Java is Java 6, then do
On Debian/Ubuntu:
sudo update-alternatives --config java\n
On CentOS/RHEL:
sudo alternatives --config java\n
The above command will ask you to choose one of the installed Java versions as default. You should choose Java 7 here.
After that, re-run java -version
to make sure the change has taken effect.
Reference link
"},{"location":"outdate/kerberos_config/","title":"Kerberos config","text":""},{"location":"outdate/kerberos_config/#kerberos","title":"Kerberos","text":"NOTE: Since version 7.0, this documenation is deprecated. Users should use Apache as a proxy server for Kerberos authentication. Then configure Seahub by the instructions in Remote User Authentication.
Kerberos is a widely used single sign on (SSO) protocol. Seafile server supports authentication via Kerberos. It allows users to log in to Seafile without entering credentials again if they have a kerberos ticket.
In this documentation, we assume the reader is familiar with Kerberos installation and configuration.
Seahub provides a special URL to handle Kerberos login. The URL is https://your-server/krb5-login
. Only this URL needs to be configured under Kerberos protection. All other URLs don't go through the Kerberos module. The overall workflow for a user to login with Kerberos is as follows:
https://your-server/krb5-login
.The configuration includes three steps:
Store the keytab under the name defined below and make it accessible only to the apache user (e.g. httpd or www-data and chmod 600).
"},{"location":"outdate/kerberos_config/#apache-configuration","title":"Apache Configuration","text":"You should create a new location in your virtual host configuration for Kerberos.
<IfModule mod_ssl.c>\n <VirtualHost _default_:443>\n ServerName seafile.example.com\n DocumentRoot /var/www\n...\n <Location /krb5-login/>\n SSLRequireSSL\n AuthType Kerberos\n AuthName \"Kerberos EXAMPLE.ORG\"\n KrbMethodNegotiate On\n KrbMethodK5Passwd On\n Krb5KeyTab /etc/apache2/conf.d/http.keytab\n #ErrorDocument 401 '<html><meta http-equiv=\"refresh\" content=\"0; URL=/accounts/login\"><body>Kerberos authentication did not pass.</body></html>'\n Require valid-user\n </Location>\n...\n </VirtualHost>\n</IfModule>\n
After restarting Apache, you should see in the Apache logs that user@REALM is used when accessing https://seafile.example.com/krb5-login/.
"},{"location":"outdate/kerberos_config/#configure-seahub","title":"Configure Seahub","text":"Seahub extracts the username from the REMOTE_USER
environment variable.
Now we have to tell Seahub what to do with the authentication information passed in by Kerberos.
Add the following option to seahub_settings.py.
ENABLE_KRB5_LOGIN = True\n
"},{"location":"outdate/kerberos_config/#verify","title":"Verify","text":"After restarting Apache and Seafile services, you can test the Kerberos login workflow.
"},{"location":"outdate/outlook_addin_config/","title":"SSO for Seafile Outlook Add-in","text":"The Seafile Add-in for Outlook natively supports authentication via username and password. In order to authenticate with SSO, the add-in utilizes SSO support integrated in Seafile's webinterface Seahub.
Specifically, this is how SSO with the add-in works :
http(s)://SEAFILE_SERVER_URL/outlook/
http(s)://SEAFILE_SERVER_URL/accounts/login/
including a redirect request to /outlook/ following a successful authentication (e.g., https://demo.seafile.com/accounts/login/?next=/jwt-sso/?page=/outlook/
)This document explains how to configure Seafile and the reverse proxy and how to deploy the PHP script.
"},{"location":"outdate/outlook_addin_config/#requirements","title":"Requirements","text":"SSO authentication must be configured in Seafile.
Seafile Server must be version 8.0 or above.
"},{"location":"outdate/outlook_addin_config/#installing-prerequisites","title":"Installing prerequisites","text":"The packages php, composer, firebase-jwt, and guzzle must be installed. PHP can usually be downloaded and installed via the distribution's official repositories. firebase-jwt and guzzle are installed using composer.
First, install the php package and check the installed version:
# CentOS/RedHat\n$ sudo yum install -y php-fpm php-curl\n$ php --version\n\n# Debian/Ubuntu\n$ sudo apt install -y php-fpm php-curl\n$ php --version\n
Second, install composer. You find an up-to-date install manual at https://getcomposer.org/ for CentOS, Debian, and Ubuntu.
Third, use composer to install firebase-jwt and guzzle in a new directory in /var/www
:
$ mkdir -p /var/www/outlook-sso\n$ cd /var/www/outlook-sso\n$ composer require firebase/php-jwt guzzlehttp/guzzle\n
"},{"location":"outdate/outlook_addin_config/#configuring-seahub","title":"Configuring Seahub","text":"Add this block to the config file seahub_settings.py
using a text editor:
ENABLE_JWT_SSO = True\nJWT_SSO_SECRET_KEY = 'SHARED_SECRET'\nENABLE_SYS_ADMIN_GENERATE_USER_AUTH_TOKEN = True\n
Replace SHARED_SECRET with a secret of your own.
"},{"location":"outdate/outlook_addin_config/#configuring-the-proxy-server","title":"Configuring the proxy server","text":"The configuration depends on the proxy server use.
If you use nginx, add the following location block to the nginx configuration:
location /outlook {\n alias /var/www/outlook-sso/public;\n index index.php;\n location ~ \\.php$ {\n fastcgi_split_path_info ^(.+\\.php)(/.+)$;\n fastcgi_pass unix:/var/run/php/php7.4-fpm.sock;\n fastcgi_param SCRIPT_FILENAME $request_filename;\n fastcgi_index index.php;\n include fastcgi_params;\n }\n}\n
This sample block assumes that PHP 7.4 is installed. If you have a different PHP version on your system, modify the version in the fastcgi_pass directive accordingly.
Note: The alias path can be altered. We advise against it unless there are good reasons. If you do, make sure you modify the path accordingly in all subsequent steps.
Finally, check the nginx configuration and restart nginx:
$ nginx -t\n$ nginx -s reload\n
"},{"location":"outdate/outlook_addin_config/#deploying-the-php-script","title":"Deploying the PHP script","text":"The PHP script and corresponding configuration files will be saved in the new directory created earlier. Change into it and add a PHP config file:
$ cd /var/www/outlook-sso\n$ nano config.php\n
Paste the following content in the config.php
:
<?php\n\n# general settings\n$seafile_url = 'SEAFILE_SERVER_URL';\n$jwt_shared_secret = 'SHARED_SECRET';\n\n# Option 1: provide credentials of a seafile admin user\n$seafile_admin_account = [\n 'username' => '',\n 'password' => '',\n];\n\n# Option 2: provide the api-token of a seafile admin user\n$seafile_admin_token = '';\n\n?>\n
First, replace SEAFILE_SERVER_URL with the URL of your Seafile Server and SHARED_SECRET with the key used in Configuring Seahub.
Second, add either the user credentials of a Seafile user with admin rights or the API-token of such a user.
In the next step, create the index.php
and copy & paste the PHP script:
$ mkdir /var/www/outlook-sso/public\n$ cd /var/www/outlook-sso/public\n$ nano index.php\n
Paste the following code block:
<?php\n/** IMPORTANT: there is no need to change anything in this file ! **/\n\nrequire_once __DIR__ . '/../vendor/autoload.php';\nrequire_once __DIR__ . '/../config.php';\n\nif(!empty($_GET['jwt-token'])){\n try {\n $decoded = Firebase\\JWT\\JWT::decode($_GET['jwt-token'], new Firebase\\JWT\\Key($jwt_shared_secret, 'HS256'));\n }\n catch (Exception $e){\n echo json_encode([\"error\" => \"wrong JWT-Token\"]);\n die();\n }\n\n try {\n // init connetion to seafile api\n $client = new GuzzleHttp\\Client(['base_uri' => $seafile_url]);\n\n // get admin api-token with his credentials (if not set)\n if(empty($seafile_admin_token)){\n $request = $client->request('POST', '/api2/auth-token/', ['form_params' => $seafile_admin_account]);\n $response = json_decode($request->getBody());\n $seafile_admin_token = $response->token;\n }\n\n // get api-token of the user\n $request = $client->request('POST', '/api/v2.1/admin/generate-user-auth-token/', [\n 'json' => ['email' => $decoded->email],\n 'headers' => ['Authorization' => 'Token '. $seafile_admin_token]\n ]);\n $response = json_decode($request->getBody());\n\n // create the output for the outlook plugin (json like response)\n echo json_encode([\n 'exp' => $decoded->exp,\n 'email' => $decoded->email,\n 'name' => $decoded->name,\n 'token' => $response->token,\n ]);\n } catch (GuzzleHttp\\Exception\\ClientException $e){\n echo $e->getResponse()->getBody();\n }\n}\nelse{ // no jwt-token. therefore redirect to the login page of seafile\n header(\"Location: \". $seafile_url .\"/accounts/login/?next=/jwt-sso/?page=/outlook\");\n} ?>\n
Note: Unlike config.php, no replacements or modifications are necessary in this file.
The directory layout in /var/www/outlook-sso/
should now look as follows:
$ tree -L 2 /var/www/outlook-sso\n/var/www/outlook-sso/\n\u251c\u2500\u2500 composer.json\n\u251c\u2500\u2500 composer.lock\n\u251c\u2500\u2500 config.php\n\u251c\u2500\u2500 public\n| \u2514\u2500\u2500 index.php\n\u2514\u2500\u2500 vendor\n \u251c\u2500\u2500 autoload.php\n \u251c\u2500\u2500 composer\n \u2514\u2500\u2500 firebase\n
Seafile and Seahub are now configured to support SSO in the Seafile Add-in for Outlook.
"},{"location":"outdate/outlook_addin_config/#testing","title":"Testing","text":"You can now test SSO authentication in the add-in. Hit the SSO button in the settings of the Seafile add-in.
"},{"location":"outdate/seaf_encrypt/","title":"Seafile Storage Encryption Backend","text":"This feature is deprecated. We recommend you to use the encryption feature provided the storage system.
Since Seafile Professional Server 5.1.3, we support storage backend encryption. When enabled, all Seafile objects (commit, fs, block) are encrypted with the AES-256-CBC algorithm before being written to the storage backend. Currently supported backends are: file system, Ceph, Swift and S3.
Note that all objects will be encrypted with the same global key/iv pair. The key/iv pair has to be generated by the system admin and stored safely. If the key/iv pair is lost, none of the data can be recovered.
"},{"location":"outdate/seaf_encrypt/#configure-storage-backend-encryption","title":"Configure Storage Backend Encryption","text":""},{"location":"outdate/seaf_encrypt/#generate-key-and-iv","title":"Generate Key and IV","text":"Go to /seafile-server-latest, execute ./seaf-gen-key.sh -h
. It will print the following usage information:
usage :\nseaf-gen-key.sh\n -p <file path to write key iv, default ./seaf-key.txt>\n
By default, the key/iv pair will be saved to a file named seaf-key.txt in the current directory. You can use the '-p' option to change the path.
"},{"location":"outdate/seaf_encrypt/#configure-a-freshly-installed-seafile-server","title":"Configure a freshly installed Seafile Server","text":"Add the following configuration to seafile.conf:
[store_crypt]\nkey_path = <the key file path generated in previous section>\n
Now the encryption feature should be working.
"},{"location":"outdate/seaf_encrypt/#migrating-existing-seafile-server","title":"Migrating Existing Seafile Server","text":"If you have existing data in the Seafile server, you have to migrate/encrypt the existing data. You must stop Seafile server before migrating the data.
"},{"location":"outdate/seaf_encrypt/#create-directories-for-encrypted-data","title":"Create Directories for Encrypted Data","text":"Create new configuration and data directories for the encrypted data.
cd seafile-server-latest\ncp -r conf conf-enc\nmkdir seafile-data-enc\ncp -r seafile-data/library-template seafile-data-enc\n# If you use SQLite database\ncp seafile-data/seafile.db seafile-data-enc/\n
"},{"location":"outdate/seaf_encrypt/#edit-config-files","title":"Edit Config Files","text":"If you configured S3/Swift/Ceph backend, edit /conf-enc/seafile.conf. You must use a different bucket/container/pool to store the encrypted data.
Then add the following configuration to /conf-enc/seafile.conf
[store_crypt]\nkey_path = <the key file path generated in previous section>\n
"},{"location":"outdate/seaf_encrypt/#migrate-the-data","title":"Migrate the Data","text":"Go to /seafile-server-latest, use the seaf-encrypt.sh script to migrate the data.
Run ./seaf-encrypt.sh -f ../conf-enc -e ../seafile-data-enc
,
Starting seaf-encrypt, please wait ...\n[04/26/16 06:59:40] seaf-encrypt.c(444): Start to encrypt 57 block among 12 repo.\n[04/26/16 06:59:40] seaf-encrypt.c(444): Start to encrypt 102 fs among 12 repo.\n[04/26/16 06:59:41] seaf-encrypt.c(454): Success encrypt all fs.\n[04/26/16 06:59:40] seaf-encrypt.c(444): Start to encrypt 66 commit among 12 repo.\n[04/26/16 06:59:41] seaf-encrypt.c(454): Success encrypt all commit.\n[04/26/16 06:59:41] seaf-encrypt.c(454): Success encrypt all block.\nseaf-encrypt run done\nDone.\n
If there are error messages after executing seaf-encrypt.sh, you can fix the problem and run the script again. Objects that have already been migrated will not be copied again.
"},{"location":"outdate/seaf_encrypt/#clean-up","title":"Clean Up","text":"Go to , execute following commands:
mv conf conf-bak\nmv seafile-data seafile-data-bak\nmv conf-enc conf\nmv seafile-data-enc seafile-data\n
Restart Seafile Server. If everything works okay, you can remove the backup directories.
"},{"location":"outdate/terms_and_conditions/","title":"Terms and Conditions","text":"Starting from version 6.0, system admin can add T&C at admin panel, all users need to accept that before using the site.
In order to use this feature, please add the following line to seahub_settings.py
,
ENABLE_TERMS_AND_CONDITIONS = True\n
After restarting, there will be a \"Terms and Conditions\" section in the sidebar of the admin panel.
"},{"location":"outdate/using_fuse/","title":"Seafile","text":""},{"location":"outdate/using_fuse/#using-fuse","title":"Using Fuse","text":"Files in the seafile system are split to blocks, which means what are stored on your seafile server are not complete files, but blocks. This design faciliates effective data deduplication.
However, administrators sometimes want to access the files directly on the server. You can use seaf-fuse to do this.
Seaf-fuse
is an implementation of the FUSE (http://fuse.sourceforge.net) virtual filesystem. In short, it mounts all the Seafile files to a folder (called the mount point), so that you can access all the files managed by the Seafile server just as you would a normal folder on your server.
Seaf-fuse has been available since Seafile Server 2.1.0.
Note: Encrypted folders can't be accessed by seaf-fuse. Currently the implementation is read-only, which means you can't modify files through the mounted folder. On Debian/CentOS systems, you need to be in the \"fuse\" group to have permission to mount a FUSE folder.
"},{"location":"outdate/using_fuse/#how-to-start-seaf-fuse","title":"How to start seaf-fuse","text":"Assume we want to mount to /data/seafile-fuse
.
mkdir -p /data/seafile-fuse\n
"},{"location":"outdate/using_fuse/#start-seaf-fuse-with-the-script","title":"Start seaf-fuse with the script","text":"'''Note:''' Before start seaf-fuse, you should have started seafile server with ./seafile.sh start
.
./seaf-fuse.sh start /data/seafile-fuse\n
"},{"location":"outdate/using_fuse/#stop-seaf-fuse","title":"Stop seaf-fuse","text":"./seaf-fuse.sh stop\n
"},{"location":"outdate/using_fuse/#contents-of-the-mounted-folder","title":"Contents of the mounted folder","text":""},{"location":"outdate/using_fuse/#the-top-level-folder","title":"The top level folder","text":"Now you can list the content of /data/seafile-fuse
.
$ ls -lhp /data/seafile-fuse\n\ndrwxr-xr-x 2 root root 4.0K Jan 1 1970 abc@abc.com/\ndrwxr-xr-x 2 root root 4.0K Jan 1 1970 foo@foo.com/\ndrwxr-xr-x 2 root root 4.0K Jan 1 1970 plus@plus.com/\ndrwxr-xr-x 2 root root 4.0K Jan 1 1970 sharp@sharp.com/\ndrwxr-xr-x 2 root root 4.0K Jan 1 1970 test@test.com/\n
$ ls -lhp /data/seafile-fuse/abc@abc.com\n\ndrwxr-xr-x 2 root root 924 Jan 1 1970 5403ac56-5552-4e31-a4f1-1de4eb889a5f_Photos/\ndrwxr-xr-x 2 root root 1.6K Jan 1 1970 a09ab9fc-7bd0-49f1-929d-6abeb8491397_My Notes/\n
As the listing above shows, under each user's folder there are subfolders, each representing a library of that user, named in the format {library_id}_{library_name}.
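This naming convention can be split back into its parts with plain shell parameter expansion — a small sketch assuming the underscore-separated format shown in the listing above:

```shell
# Split a seaf-fuse library folder name into library id and library name.
# The library id (a UUID) contains no underscores, so cutting at the first
# underscore is safe even if the library name itself contains underscores.
dir="5403ac56-5552-4e31-a4f1-1de4eb889a5f_Photos"
lib_id="${dir%%_*}"   # everything before the first underscore
lib_name="${dir#*_}"  # everything after the first underscore
echo "$lib_id"
echo "$lib_name"
```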
"},{"location":"outdate/using_fuse/#the-folder-for-a-library","title":"The folder for a library","text":"$ ls -lhp /data/seafile-fuse/abc@abc.com/5403ac56-5552-4e31-a4f1-1de4eb889a5f_Photos/\n\n-rw-r--r-- 1 root root 501K Jan 1 1970 image.png\n-rw-r--r-- 1 root root 501K Jan 1 1970 sample.jpng\n
"},{"location":"outdate/using_fuse/#if-you-get-a-permission-denied-error","title":"If you get a \"Permission denied\" error","text":"If you get an error message saying \"Permission denied\" when running ./seaf-fuse.sh start
, most likely you are not in the \"fuse\" group. You should:
sudo usermod -a -G fuse <your-user-name>\n
./seaf-fuse.sh start <path>
again. You need to install the ffmpeg package for video thumbnails to work correctly:
Ubuntu 16.04
# Install ffmpeg\nsudo apt-get update && sudo apt-get -y install ffmpeg\n\n# Now we need to install some modules\npip install pillow moviepy\n
Centos 7
# We need to activate the epel repos\nyum -y install epel-release\nrpm --import http://li.nux.ro/download/nux/RPM-GPG-KEY-nux.ro\n\n# Then update the repo and install ffmpeg\nyum -y install ffmpeg ffmpeg-devel\n\n# Now we need to install some modules\npip install pillow moviepy\n
Debian Jessie
# Add backports repo to /etc/apt/sources.list.d/\n# e.g. the following repo works (June 2017)\nsudo echo \"deb http://httpredir.debian.org/debian $(lsb_release -cs)-backports main non-free\" > /etc/apt/sources.list.d/debian-backports.list\n\n# Then update the repo and install ffmpeg\nsudo apt-get update && sudo apt-get -y install ffmpeg\n\n# Now we need to install some modules\npip install pillow moviepy\n
"},{"location":"outdate/video_thumbnails/#configure-seafile-to-create-thumbnails","title":"Configure Seafile to create thumbnails","text":"Now configure accordingly in seahub_settings.py
# Enable or disable thumbnail for video. ffmpeg and moviepy should be installed first. \n# For details, please refer to https://manual.seafile.com/deploy/video_thumbnails/\n# NOTE: since version 6.1\nENABLE_VIDEO_THUMBNAIL = True\n\n# Use the frame at 5 second as thumbnail\nTHUMBNAIL_VIDEO_FRAME_TIME = 5 \n\n# Absolute filesystem path to the directory that will hold thumbnail files.\nTHUMBNAIL_ROOT = '/haiwen/seahub-data/thumbnail/thumb/'\n
"},{"location":"setup/caddy/","title":"HTTPS and Caddy","text":"Note
From Seafile Docker 12.0, HTTPS is handled by Caddy. The default Caddy image used by Seafile Docker is lucaslorentz/caddy-docker-proxy:2.9
.
Caddy is a modern open-source web server that routes external traffic to the internal services of Seafile Docker. In addition to the advantages of traditional proxy components (e.g., Nginx), Caddy makes it easier to obtain and renew HTTPS certificates by providing simpler configuration.
To enable HTTPS, you only need to correctly configure the following fields in .env
:
SEAFILE_SERVER_PROTOCOL=https\nSEAFILE_SERVER_HOSTNAME=example.com\n
"},{"location":"setup/cluster_deploy_with_docker/","title":"Seafile Docker Cluster Deployment","text":"Seafile Docker cluster deployment requires \"sticky session\" settings in the load balancer. Otherwise sometimes folder download on the web UI can't work properly. Read the Load Balancer Setting for details.
"},{"location":"setup/cluster_deploy_with_docker/#environment","title":"Environment","text":"System: Ubuntu 24.04
Seafile Server: 2 frontend nodes, 1 backend node
We assume you have already deployed Memcached, MariaDB and ElasticSearch on separate machines and use S3-like object storage.
"},{"location":"setup/cluster_deploy_with_docker/#deploy-seafile-service","title":"Deploy Seafile service","text":""},{"location":"setup/cluster_deploy_with_docker/#deploy-the-first-seafile-frontend-node","title":"Deploy the first Seafile frontend node","text":"Create the mount directory
mkdir -p /opt/seafile/shared\n
Pulling Seafile image
docker pull seafileltd/seafile-pro-mc:12.0-latest\n
Note
Since v12.0, Seafile PE images are hosted on DockerHub and do not require a username and password to download.
Download the seafile-server.yml
and .env
wget -O .env https://manual.seafile.com/12.0/docker/cluster/env\nwget https://manual.seafile.com/12.0/docker/cluster/seafile-server.yml\n
Modify the variables in .env
(especially the terms like <...>
).
Tip
If you have already deployed S3 storage backend and plan to apply it to Seafile cluster, you can modify the variables in .env
to set them synchronously during initialization.
Start the Seafile docker
$ cd /opt/seafile\n$ docker compose up -d\n
Cluster init mode
Because CLUSTER_INIT_MODE is true in the .env
file, Seafile docker will be started in init mode and generate configuration files. As the results, you can see the following lines if you trace the Seafile container (i.e., docker logs seafile
):
---------------------------------\nThis is your configuration\n---------------------------------\n\n server name: seafile\n server ip/domain: seafile.example.com\n\n seafile data dir: /opt/seafile/seafile-data\n fileserver port: 8082\n\n database: create new\n ccnet database: ccnet_db\n seafile database: seafile_db\n seahub database: seahub_db\n database user: seafile\n\n\nGenerating seafile configuration ...\n\ndone\nGenerating seahub configuration ...\n\n\n-----------------------------------------------------------------\nYour seafile server configuration has been finished successfully.\n-----------------------------------------------------------------\n\n\n[2024-11-21 02:22:37] Updating version stamp\nStart init\n\nInit success\n
In initialization mode, the service will not be started. During this time you can check the generated configuration files (e.g., for MySQL, Memcached, Elasticsearch):
After initializing the cluster, the following fields can be removed from .env
CLUSTER_INIT_MODE
, must be removed from .env fileCLUSTER_INIT_MEMCACHED_HOST
CLUSTER_INIT_ES_HOST
CLUSTER_INIT_ES_PORT
INIT_S3_STORAGE_BACKEND_CONFIG
INIT_S3_COMMIT_BUCKET
INIT_S3_FS_BUCKET
INIT_S3_BLOCK_BUCKET
INIT_S3_KEY_ID
INIT_S3_SECRET_KEY
Tip
We recommend that you check that the relevant configuration files are correct and copy the SEAFILE_VOLUME
directory before the service is officially started, because only the configuration files are generated after initialization. You can directly migrate the entire copied SEAFILE_VOLUME
to other nodes later:
cp -r /opt/seafile/shared /opt/seafile/shared-bak\n
Restart the container to start the service in frontend node
docker compose down\ndocker compose up -d\n
Frontend node starts successfully
After executing the above command, you can trace the logs of container seafile
(i.e., docker logs seafile
). You can see the following message if the frontend node starts successfully:
*** Running /etc/my_init.d/01_create_data_links.sh...\n*** Booting runit daemon...\n*** Runit started as PID 20\n*** Running /scripts/enterpoint.sh...\n2024-11-21 03:02:35 Nginx ready \n\n2024-11-21 03:02:35 This is an idle script (infinite loop) to keep container running. \n---------------------------------\n\nSeafile cluster frontend mode\n\n---------------------------------\n\n\nStarting seafile server, please wait ...\nLicense file /opt/seafile/seafile-license.txt does not exist, allow at most 3 trial users\nSeafile server started\n\nDone.\n\nStarting seahub at port 8000 ...\n\nSeahub is started\n\nDone.\n
You can directly copy all the directories generated by the first frontend node, including the Docker-compose files (e.g., seafile-server.yml
, .env
) and modified configuration files, and then start the seafile docker container:
docker compose down\ndocker compose up -d\n
"},{"location":"setup/cluster_deploy_with_docker/#deploy-seafile-backend-node","title":"Deploy seafile backend node","text":"Create the mount directory
$ mkdir -p /opt/seafile/shared\n
Pulling Seafile image
Copy seafile-server.yml
, .env
and configuration files from frontend node
Note
The configuration files from frontend node have to be put in the same path as the frontend node, i.e., /opt/seafile/shared/seafile/conf/*
Modify .env
, set CLUSTER_MODE
to backend
Start the service in the backend node
docker compose up -d\n
Backend node starts successfully
After executing the above command, you can trace the logs of container seafile
(i.e., docker logs seafile
). You can see the following message if the backend node starts successfully:
*** Running /etc/my_init.d/01_create_data_links.sh...\n*** Booting runit daemon...\n*** Runit started as PID 21\n*** Running /scripts/enterpoint.sh...\n2024-11-21 03:11:59 Nginx ready \n2024-11-21 03:11:59 This is an idle script (infinite loop) to keep container running. \n\n---------------------------------\n\nSeafile cluster backend mode\n\n---------------------------------\n\n\nStarting seafile server, please wait ...\nLicense file /opt/seafile/seafile-license.txt does not exist, allow at most 3 trial users\nSeafile server started\n\nDone.\n\nStarting seafile background tasks ...\nDone.\n
Execute the following commands on the two Seafile frontend servers:
$ apt install haproxy keepalived -y\n\n$ mv /etc/haproxy/haproxy.cfg /etc/haproxy/haproxy.cfg.bak\n\n$ cat > /etc/haproxy/haproxy.cfg << 'EOF'\nglobal\n log 127.0.0.1 local1 notice\n maxconn 4096\n user haproxy\n group haproxy\n\ndefaults\n log global\n mode http\n retries 3\n timeout connect 10000\n timeout client 300000\n timeout server 300000\n\nlisten seafile 0.0.0.0:80\n mode http\n option httplog\n option dontlognull\n option forwardfor\n cookie SERVERID insert indirect nocache\n server seafile01 Front-End01-IP:8001 check port 11001 cookie seafile01\n server seafile02 Front-End02-IP:8001 check port 11001 cookie seafile02\nEOF\n
Warning
Please correctly modify the IP address (Front-End01-IP
and Front-End02-IP
) of the frontend servers in the above configuration file. Otherwise it will not work properly.
Choose one of the above two servers as the master node, and the other as the slave node.
Perform the following operations on the master node:
$ cat > /etc/keepalived/keepalived.conf << 'EOF'\n! Configuration File for keepalived\n\nglobal_defs {\n notification_email {\n root@localhost\n }\n notification_email_from keepalived@localhost\n smtp_server 127.0.0.1\n smtp_connect_timeout 30\n router_id node1\n vrrp_mcast_group4 224.0.100.18\n}\n\nvrrp_instance VI_1 {\n state MASTER\n interface eno1 # Set to the device name of a valid network interface on the current server, and the virtual IP will be bound to the network interface\n virtual_router_id 50\n priority 100\n advert_int 1\n authentication {\n auth_type PASS\n auth_pass seafile123\n }\n virtual_ipaddress {\n 172.26.154.45/24 dev eno1 # Configure to the correct virtual IP and network interface device name\n }\n}\nEOF\n
Warning
Please correctly configure the virtual IP address and network interface device name in the above file. Otherwise it will not work properly.
Perform the following operations on the standby node:
$ cat > /etc/keepalived/keepalived.conf << 'EOF'\n! Configuration File for keepalived\n\nglobal_defs {\n notification_email {\n root@localhost\n }\n notification_email_from keepalived@localhost\n smtp_server 127.0.0.1\n smtp_connect_timeout 30\n router_id node2\n vrrp_mcast_group4 224.0.100.18\n}\n\nvrrp_instance VI_1 {\n state BACKUP\n interface eno1 # Set to the device name of a valid network interface on the current server, and the virtual IP will be bound to the network interface\n virtual_router_id 50\n priority 98\n advert_int 1\n authentication {\n auth_type PASS\n auth_pass seafile123\n }\n virtual_ipaddress {\n 172.26.154.45/24 dev eno1 # Configure to the correct virtual IP and network interface device name\n }\n}\nEOF\n
Finally, run the following commands on the two Seafile frontend servers to start the corresponding services:
$ systemctl enable --now haproxy\n$ systemctl enable --now keepalived\n
So far, Seafile cluster has been deployed.
"},{"location":"setup/cluster_deploy_with_docker/#optional-deploy-seadoc-server","title":"(Optional) Deploy SeaDoc server","text":"You can follow here to deploy SeaDoc server. And then modify SEADOC_SERVER_URL
in your .env
file
This manual explains how to deploy and run Seafile Server on a Linux server using Kubernetes (k8s thereafter).
"},{"location":"setup/cluster_deploy_with_k8s/#gettings-started","title":"Gettings started","text":"The two volumes for persisting data, /opt/seafile-data
and /opt/seafile-mysql
, are still adopted in this manual. What's more, all k8s YAML files will be placed in /opt/seafile-k8s-yaml
. It is not recommended to change these paths. If you do, account for it when following these instructions.
The two tools, kubectl and a k8s control plane tool (i.e., kubeadm), are required and can be installed with official installation guide.
Multi-node deployment
If it is a multi-node deployment, k8s control plane needs to be installed on each node. After installation, you need to start the k8s control plane service on each node and refer to the k8s official manual for creating a cluster. Since this manual still uses the same image as docker deployment, we need to add the following repository to k8s:
kubectl create secret docker-registry regcred --docker-server=seafileltd --docker-username=seafile --docker-password=zjkmid6rQibdZ=uJMuWS\n
"},{"location":"setup/cluster_deploy_with_k8s/#yaml","title":"YAML","text":"Seafile mainly involves three different services, namely database service, cache service and seafile service. Since these three services do not have a direct dependency relationship, we need to separate them from the entire docker-compose.yml (in this manual, we use Seafile 12 PRO) and divide them into three pods. For each pod, we need to define a series of YAML files for k8s to read, and we will store these YAMLs in /opt/seafile-k8s-yaml
.
Note
This series of YAML mainly includes Deployment for pod management and creation, Service for exposing services to the external network, PersistentVolume for defining the location of a volume used for persistent storage on the host and Persistentvolumeclaim for declaring the use of persistent storage in the container. For futher configuration details, you can refer the official documents.
"},{"location":"setup/cluster_deploy_with_k8s/#mariadb","title":"mariadb","text":""},{"location":"setup/cluster_deploy_with_k8s/#mariadb-deploymentyaml","title":"mariadb-deployment.yaml","text":"apiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: mariadb\nspec:\n selector:\n matchLabels:\n app: mariadb\n replicas: 1\n template:\n metadata:\n labels:\n app: mariadb\n spec:\n containers:\n - name: mariadb\n image: mariadb:10.11\n env:\n - name: MARIADB_ROOT_PASSWORD\n value: \"db_password\"\n - name: MARIADB_AUTO_UPGRADE\n value: \"true\"\n ports:\n - containerPort: 3306\n volumeMounts:\n - name: mariadb-data\n mountPath: /var/lib/mysql\n volumes:\n - name: mariadb-data\n persistentVolumeClaim:\n claimName: mariadb-data\n
Please replease MARIADB_ROOT_PASSWORD
to your own mariadb password.
Tip
In the above Deployment configuration file, no restart policy for the pod is specified. The default restart policy is Always. If you need to modify it, add the following to the spec attribute:
restartPolicy: OnFailure\n\n#Note:\n# Always: always restart (include normal exit)\n# OnFailure: restart only with unexpected exit\n# Never: do not restart\n
"},{"location":"setup/cluster_deploy_with_k8s/#mariadb-serviceyaml","title":"mariadb-service.yaml","text":"apiVersion: v1\nkind: Service\nmetadata:\n name: mariadb\nspec:\n selector:\n app: mariadb\n ports:\n - protocol: TCP\n port: 3306\n targetPort: 3306\n
"},{"location":"setup/cluster_deploy_with_k8s/#mariadb-persistentvolumeyaml","title":"mariadb-persistentvolume.yaml","text":"apiVersion: v1\nkind: PersistentVolume\nmetadata:\n name: mariadb-data\nspec:\n capacity:\n storage: 1Gi\n accessModes:\n - ReadWriteOnce\n hostPath:\n path: /opt/seafile-mysql/db\n
"},{"location":"setup/cluster_deploy_with_k8s/#mariadb-persistentvolumeclaimyaml","title":"mariadb-persistentvolumeclaim.yaml","text":"apiVersion: v1\nkind: PersistentVolumeClaim\nmetadata:\n name: mariadb-data\nspec:\n accessModes:\n - ReadWriteOnce\n resources:\n requests:\n storage: 10Gi\n
"},{"location":"setup/cluster_deploy_with_k8s/#memcached","title":"memcached","text":""},{"location":"setup/cluster_deploy_with_k8s/#memcached-deploymentyaml","title":"memcached-deployment.yaml","text":"apiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: memcached\nspec:\n replicas: 1\n selector:\n matchLabels:\n app: memcached\n template:\n metadata:\n labels:\n app: memcached\n spec:\n containers:\n - name: memcached\n image: memcached:1.6.18\n args: [\"-m\", \"256\"]\n ports:\n - containerPort: 11211\n
"},{"location":"setup/cluster_deploy_with_k8s/#memcached-serviceyaml","title":"memcached-service.yaml","text":"apiVersion: v1\nkind: Service\nmetadata:\n name: memcached\nspec:\n selector:\n app: memcached\n ports:\n - protocol: TCP\n port: 11211\n targetPort: 11211\n
"},{"location":"setup/cluster_deploy_with_k8s/#seafile","title":"Seafile","text":""},{"location":"setup/cluster_deploy_with_k8s/#seafile-deploymentyaml","title":"seafile-deployment.yaml","text":"apiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: seafile\nspec:\n replicas: 1\n selector:\n matchLabels:\n app: seafile\n template:\n metadata:\n labels:\n app: seafile\n spec:\n containers:\n - name: seafile\n # image: seafileltd/seafile-mc:9.0.10\n # image: seafileltd/seafile-mc:11.0-latest\n image: seafileltd/seafile-pro-mc:12.0-latest\n env:\n - name: DB_HOST\n value: \"mariadb\"\n - name: DB_ROOT_PASSWD\n value: \"db_password\" #db's password\n - name: TIME_ZONE\n value: \"Europe/Berlin\"\n - name: INIT_SEAFILE_ADMIN_EMAIL\n value: \"admin@seafile.com\" #admin email\n - name: INIT_SEAFILE_ADMIN_PASSWORD\n value: \"admin_password\" #admin password\n - name: SEAFILE_SERVER_LETSENCRYPT\n value: \"false\"\n - name: SEAFILE_SERVER_HOSTNAME\n value: \"you_seafile_domain\" #hostname\n ports:\n - containerPort: 80\n # - containerPort: 443\n # name: seafile-secure\n volumeMounts:\n - name: seafile-data\n mountPath: /shared\n volumes:\n - name: seafile-data\n persistentVolumeClaim:\n claimName: seafile-data\n restartPolicy: Always\n # to get image from protected repository\n imagePullSecrets:\n - name: regcred\n
Please replease the above configurations, such as database root password, admin in seafile."},{"location":"setup/cluster_deploy_with_k8s/#seafile-serviceyaml","title":"seafile-service.yaml","text":"apiVersion: v1\nkind: Service\nmetadata:\n name: seafile\nspec:\n selector:\n app: seafile\n type: LoadBalancer\n ports:\n - protocol: TCP\n port: 80\n targetPort: 80\n nodePort: 30000\n
"},{"location":"setup/cluster_deploy_with_k8s/#seafile-persistentvolumeyaml","title":"seafile-persistentvolume.yaml","text":"apiVersion: v1\nkind: PersistentVolume\nmetadata:\n name: seafile-data\nspec:\n capacity:\n storage: 10Gi\n accessModes:\n - ReadWriteOnce\n hostPath:\n path: /opt/seafile-data\n
"},{"location":"setup/cluster_deploy_with_k8s/#seafile-persistentvolumeclaimyaml","title":"seafile-persistentvolumeclaim.yaml","text":"apiVersion: v1\nkind: PersistentVolumeClaim\nmetadata:\n name: seafile-data\nspec:\n accessModes:\n - ReadWriteOnce\n resources:\n requests:\n storage: 10Gi\n
"},{"location":"setup/cluster_deploy_with_k8s/#deploy-pods","title":"Deploy pods","text":"You can use following command to deploy pods:
kubectl apply -f /opt/seafile-k8s-yaml/\n
"},{"location":"setup/cluster_deploy_with_k8s/#container-management","title":"Container management","text":"Similar to docker installation, you can also manage containers through some kubectl commands. For example, you can use the following command to check whether the relevant resources are started successfully and whether the relevant services can be accessed normally. First, execute the following command and remember the pod name with seafile-
as the prefix (such as seafile-748b695648-d6l4g)
kubectl get pods\n
You can check a status of a pod by
kubectl logs seafile-748b695648-d6l4g\n
and enter a container by
kubectl exec -it seafile-748b695648-d6l4g -- bash\n
If you modify some configurations in /opt/seafile-data/conf
and need to restart the container, the following command can be refered:
kubectl delete deployments --all\nkubectl apply -f /opt/seafile-k8s-yaml/\n
"},{"location":"setup/migrate_backends_data/","title":"Migrate data between different backends","text":"Seafile supports data migration between filesystem, s3, ceph, swift and Alibaba oss.
Data migration takes 3 steps:
We need to add new backend configurations to this file (including [block_backend]
, [commit_object_backend]
, [fs_object_backend]
options) and save it under a readable path. Let's assume that we are migrating data to S3 and create temporary seafile.conf under /opt
cat > seafile.conf << EOF\n[commit_object_backend]\nname = s3\nbucket = seacomm\nkey_id = ******\nkey = ******\n\n[fs_object_backend]\nname = s3\nbucket = seafs\nkey_id = ******\nkey = ******\n\n[block_backend]\nname = s3\nbucket = seablk\nkey_id = ******\nkey = ******\nEOF\n\nmv seafile.conf /opt\n
If you want to migrate to a local file system, the seafile.conf temporary configuration example is as follows:
cat > seafile.conf << EOF\n[commit_object_backend]\nname = fs\n# the dir configuration is the new seafile-data path\ndir = /var/data_backup\n\n[fs_object_backend]\nname = fs\n# the dir configuration is the new seafile-data path\ndir = /var/data_backup\n\n[block_backend]\nname = fs\n# the dir configuration is the new seafile-data path\ndir = /var/data_backup\n\nEOF\n\nmv seafile.conf /opt\n
Replace the configurations with your own values.
"},{"location":"setup/migrate_backends_data/#migrating-to-sse-c-encrypted-s3-storage","title":"Migrating to SSE-C Encrypted S3 Storage","text":"If you are migrating to S3 storage, and want your data to be encrypted at rest, you can configure SSE-C encryption options in the temporary seafile.conf. Note that you have to use Seafile Pro 11 or newer and make sure your S3 storage supports SSE-C.
cat > seafile.conf << EOF\n[commit_object_backend]\nname = s3\nbucket = seacomm\nkey_id = ******\nkey = ******\nuse_v4_signature = true\nuse_https = true\nsse_c_key = XiqMSf3x5ja4LRibBbV0sVntVpdHXl3P\n\n[fs_object_backend]\nname = s3\nbucket = seafs\nkey_id = ******\nkey = ******\nuse_v4_signature = true\nuse_https = true\nsse_c_key = XiqMSf3x5ja4LRibBbV0sVntVpdHXl3P\n\n[block_backend]\nname = s3\nbucket = seablk\nkey_id = ******\nkey = ******\nuse_v4_signature = true\nuse_https = true\nsse_c_key = XiqMSf3x5ja4LRibBbV0sVntVpdHXl3P\nEOF\n\nmv seafile.conf /opt\n
sse_c_key
is a string of 32 characters.
You can generate sse_c_key
with the following command:
openssl rand -base64 24\n
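As a sanity check, base64-encoding 24 random bytes always yields exactly 32 characters, which is why the command above produces a key of the required length:

```shell
# Generate a candidate key and confirm it is 32 characters long.
key=$(openssl rand -base64 24)
echo "${#key}"    # prints 32
```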
"},{"location":"setup/migrate_backends_data/#migrating-large-number-of-objects","title":"Migrating large number of objects","text":"If you have millions of objects in the storage (especially fs objects), it may take quite long time to migrate all objects. More than half of the time is spent on checking whether an object exists in the destination storage. Since Pro edition 7.0.8, a feature is added to speed-up the checking.
Before running the migration script, please set this environment variable:
export OBJECT_LIST_FILE_PATH=/path/to/object/list/file\n
3 files will be created: /path/to/object/list/file.commit
,/path/to/object/list/file.fs
, /path/to/object/list/file.blocks
.
When you run the script for the first time, the object list file is filled with the objects already present in the destination. On subsequent runs, the script loads the object list from the file instead of querying the destination, and newly migrated objects are appended to the file. During migration, the process checks whether an object exists against this pre-loaded list rather than asking the destination, which greatly speeds up the migration.
It's suggested that you don't interrupt the script during the \"fetch object list\" stage when you run it for the first time. Otherwise the object list in the file will be incomplete.
Another trick to speed up the migration is to increase the number of worker threads and the size of the task queue in the migration script. You can modify the nworker
and maxsize
variables in the following code:
class ThreadPool(object):\n\n    def __init__(self, do_work, nworker=20):\n        self.do_work = do_work\n        self.nworker = nworker\n        self.task_queue = Queue.Queue(maxsize = 2000)\n
The number of workers can be set to relatively large values, since they're mostly waiting for I/O operations to finish.
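For illustration (the values 50 and 8000 are arbitrary examples), you can preview the substitutions with sed before applying them to the migration script with sed -i:

```shell
# Preview the edits on sample lines; apply the same two expressions with
# `sed -i` to the actual migration script under seafile-server-latest.
printf 'nworker=20\nmaxsize = 2000\n' \
  | sed -e 's/nworker=20/nworker=50/' -e 's/maxsize = 2000/maxsize = 8000/'
```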
"},{"location":"setup/migrate_backends_data/#decrypting-encrypted-storage-backend","title":"Decrypting encrypted storage backend","text":"If you have an encrypted storage backend (a deprecated feature no long supported now), you can use this script to migrate and decrypt the data from that backend to a new one. You can add the --decrypt
option, which will decrypt the data while reading it, and then write the unencrypted data to the new backend. Note that you need to add this option in all stages of the migration.
cd ~/haiwen/seafile-server-latest\n./migrate.sh /opt --decrypt\n
"},{"location":"setup/migrate_backends_data/#run-migratesh-to-initially-migrate-objects","title":"Run migrate.sh to initially migrate objects","text":"This step will migrate most of objects from the source storage to the destination storage. You don't need to stop Seafile service at this stage as it may take quite long time to finish. Since the service is not stopped, some new objects may be added to the source storage during migration. Those objects will be handled in the next step.
We assume you have installed Seafile Pro server under ~/haiwen
, enter ~/haiwen/seafile-server-latest
and run migrate.sh with the parent path of the temporary seafile.conf as the parameter, which here is /opt
.
cd ~/haiwen/seafile-server-latest\n./migrate.sh /opt\n
Tip
This script is completely reentrant. So you can stop and restart it, or run it many times. It will check whether an object exists in the destination before sending it.
"},{"location":"setup/migrate_backends_data/#run-final-migration","title":"Run final migration","text":"New objects added during the last migration step will be migrated in this step. To prevent new objects being added, you have to stop Seafile service during the final migration operation. This usually take short time. If you have large number of objects, please following the optimization instruction in previous section.
You just have to stop the Seafile and Seahub services, then run the migration script again.
cd ~/haiwen/seafile-server-latest\n./migrate.sh /opt\n
"},{"location":"setup/migrate_backends_data/#replace-the-original-seafileconf","title":"Replace the original seafile.conf","text":"After running the script, we need replace the original seafile.conf with new one:
mv /opt/seafile.conf ~/haiwen/conf\n
The new seafile.conf only contains configurations about the backend; other config options, e.g. memcached and quota, can then be copied from the original seafile.conf file.
After replacing seafile.conf, you can restart seafile server and access the data on the new backend.
"},{"location":"setup/migrate_ce_to_pro_with_docker/","title":"Migrate CE to Pro with Docker","text":""},{"location":"setup/migrate_ce_to_pro_with_docker/#preparation","title":"Preparation","text":".env
and seafile-server.yml
of Seafile Pro.
wget -O .env https://manual.seafile.com/12.0/docker/pro/env\nwget https://manual.seafile.com/12.0/docker/pro/seafile-server.yml\n
"},{"location":"setup/migrate_ce_to_pro_with_docker/#migrate","title":"Migrate","text":""},{"location":"setup/migrate_ce_to_pro_with_docker/#stop-the-seafile-ce","title":"Stop the Seafile CE","text":"docker compose down\n
Tip
To ensure data security, it is recommended that you back up your MySQL data
"},{"location":"setup/migrate_ce_to_pro_with_docker/#put-your-licence-file","title":"Put your licence file","text":"Copy the seafile-license.txt
to the volume directory of the Seafile CE's data. If the directory is /opt/seafile-data
, you should put it in /opt/seafile-data/seafile/
.
Modify .env
based on the old configurations from the old .env
file. The following fields deserve special attention; the others should be the same as the old configurations:
SEAFILE_IMAGE
The Seafile Pro docker image, whose tag must be equal to or newer than the old Seafile CE docker tag seafileltd/seafile-pro-mc:12.0-latest
SEAFILE_ELASTICSEARCH_VOLUME
The volume directory of Elasticsearch data /opt/seafile-elasticsearch/data
Other fields (e.g., SEAFILE_VOLUME
, SEAFILE_MYSQL_VOLUME
, SEAFILE_MYSQL_DB_USER
, SEAFILE_MYSQL_DB_PASSWORD
) must be consistent with the old configurations.
Tip
For the configurations used only during initialization (e.g., INIT_SEAFILE_ADMIN_EMAIL
, INIT_SEAFILE_MYSQL_ROOT_PASSWORD
, INIT_S3_STORAGE_BACKEND_CONFIG
), you can remove them from .env
as well.
Replace the old seafile-server.yml
and .env
with the new, modified files (e.g., if your old seafile-server.yml
and .env
are in the /opt
)
mv -b seafile-server.yml /opt/seafile-server.yml\nmv -b .env /opt/.env\n
"},{"location":"setup/migrate_ce_to_pro_with_docker/#do-the-migration","title":"Do the migration","text":"The Seafile Pro container needs to be running during the migration process, which means that end users may access the Seafile service during this process. In order to avoid the data confusion caused by this, it is recommended that you take the necessary measures to temporarily prohibit users from accessing the Seafile service. For example, modify the firewall policy.
Run the following command to start the Seafile Pro container:
docker compose up -d\n
Then run the migration script by executing the following command:
docker exec -it seafile /opt/seafile/seafile-server-latest/pro/pro.py setup --migrate\n
After the migration script runs successfully, modify es_host, es_port
in /opt/seafile-data/seafile/conf/seafevents.conf
manually.
[INDEX FILES]\nes_host = elasticsearch\nes_port = 9200\nenabled = true\ninterval = 10m\n
Restart the Seafile Pro container.
docker restart seafile\n
Now you have a Seafile Professional service.
"},{"location":"setup/migrate_non_docker_to_docker/","title":"Migrate from non-docker Seafile deployment to docker","text":"The recommended steps to migrate from non-docker deployment to docker deployment are:
The following document assumes that the deployment path of your non-Docker version of Seafile is /opt/seafile. If you use another path, adjust the paths in the commands accordingly.
Note
You can also refer to the Seafile backup and recovery documentation, deploy Seafile Docker on another machine, and then copy the old configuration information, database, and seafile-data to the new machine to complete the migration. The advantage of this is that even if an error occurs during the migration process, the existing system will not be destroyed.
"},{"location":"setup/migrate_non_docker_to_docker/#migrate","title":"Migrate","text":""},{"location":"setup/migrate_non_docker_to_docker/#stop-seafile-nginx","title":"Stop Seafile, Nginx","text":"Stop the locally deployed Seafile, Nginx, Memcache
systemctl stop nginx && systemctl disable nginx\nsystemctl stop memcached && systemctl disable memcached\n./seafile.sh stop && ./seahub.sh stop\n
"},{"location":"setup/migrate_non_docker_to_docker/#prepare-mysql-and-the-folders-for-seafile-docker","title":"Prepare MySQL and the folders for Seafile docker","text":""},{"location":"setup/migrate_non_docker_to_docker/#add-permissions-to-the-local-mysql-seafile-user","title":"Add permissions to the local MySQL Seafile user","text":"The non-Docker version uses the local MySQL. Now if the Docker version of Seafile connects to this MySQL, you need to increase the corresponding access permissions.
The following commands assume that you use seafile
as the database user:
## Note, change the password according to the actual password you use\nGRANT ALL PRIVILEGES ON *.* TO 'seafile'@'%' IDENTIFIED BY 'your-password' WITH GRANT OPTION;\n\n## Grant seafile user can connect the database from any IP address\nGRANT ALL PRIVILEGES ON `ccnet_db`.* to 'seafile'@'%';\nGRANT ALL PRIVILEGES ON `seafile_db`.* to 'seafile'@'%';\nGRANT ALL PRIVILEGES ON `seahub_db`.* to 'seafile'@'%';\n\n## Restart MySQL\nsystemctl restart mariadb\n
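To verify that the grants took effect, you can list them for the seafile user (a verification sketch; run on the MySQL host and enter the root password at the prompt):

```shell
# List the privileges now held by the seafile user for remote connections.
mysql -u root -p -e "SHOW GRANTS FOR 'seafile'@'%';"
```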
"},{"location":"setup/migrate_non_docker_to_docker/#create-the-required-directories-for-seafile-docker-image","title":"Create the required directories for Seafile Docker image","text":"By default, we take /opt/seafile-data
as an example.
mkdir -p /opt/seafile-data/seafile\n
"},{"location":"setup/migrate_non_docker_to_docker/#prepare-config-files","title":"Prepare config files","text":"Copy the original config files to the directory to be mapped by the docker version of seafile
cp -r /opt/seafile/conf /opt/seafile-data/seafile\ncp -r /opt/seafile/seahub-data /opt/seafile-data/seafile\n
Modify the MySQL configuration in /opt/seafile-data/seafile/conf
, including ccnet.conf
, seafile.conf
, seahub_settings.py
, change HOST=127.0.0.1
to HOST=<local ip>
.
Modify the memcached configuration in seahub_settings.py
to use the Docker version of Memcached: change it to 'LOCATION': 'memcached:11211'
(the network name of Docker version of Memcached is memcached
).
We recommend downloading the Seafile docker YAML files into /opt/seafile-data
mkdir -p /opt/seafile-data\ncd /opt/seafile-data\n# e.g., pro edition\nwget -O .env https://manual.seafile.com/12.0/docker/pro/env\nwget https://manual.seafile.com/12.0/docker/pro/seafile-server.yml\nwget https://manual.seafile.com/12.0/docker/caddy.yml\n\nnano .env\n
After downloading the relevant configuration files, you should also modify the .env
by following these steps:
Follow here to set up the database user information.
Mount the old Seafile data to the new Seafile server
SEAFILE_VOLUME=<old-Seafile-data>\n
"},{"location":"setup/migrate_non_docker_to_docker/#start-seafile-docker","title":"Start Seafile docker","text":"Start Seafile docker and check if everything is okay:
cd /opt/seafile-data\ndocker compose up -d\n
"},{"location":"setup/migrate_non_docker_to_docker/#security","title":"Security","text":"While it is not possible from inside a docker container to connect to the host database via localhost but via <local ip>
you also need to bind your databaseserver to that IP. If this IP is public, it is strongly advised to protect your database port with a firewall. Otherwise your databases are reachable via internet. An alternative might be to start another local IP from RFC 1597 e.g. 192.168.123.45
. Afterwards you can bind to that IP.
The following iptables commands protect MariaDB/MySQL:
iptables -A INPUT -s 172.16.0.0/12 -j ACCEPT #Allow Dockernetworks\niptables -A INPUT -p tcp -m tcp --dport 3306 -j DROP #Deny Internet\nip6tables -A INPUT -p tcp -m tcp --dport 3306 -j DROP #Deny Internet\n
Keep in mind these rules do not persist across reboots!"},{"location":"setup/migrate_non_docker_to_docker/#binding-based","title":"Binding based","text":"On Debian-based Linux distros, you can add a local IP by adding to /etc/network/interfaces
something like:
iface eth0 inet static\n address 192.168.123.45/32\n
eth0
might be ensXY
. Or, if you know how to create a dummy interface, that's even better. On SUSE-based systems, edit /etc/sysconfig/network/ifcfg-eth0
(ethXY/ensXY/bondXY)
A MariaDB server can only bind to one IP address (e.g. 192.158.1.38, or 0.0.0.0 for all interfaces). So if you bind your MariaDB server to the new address, other applications might need to be reconfigured.
In /etc/mysql/mariadb.conf.d/50-server.cnf
edit the following line to:
bind-address = 192.168.123.45\n
then edit the Host line in seafile.conf and seahub_settings.py under /opt/seafile-data/seafile/conf/ to that IP, and execute the following commands:
service networking reload\nip a #to check whether the ip is present\nservice mysql restart\nss -tulpen | grep 3306 #to check whether the database listens on the correct IP\ncd /opt/seafile-data/\ndocker compose down\ndocker compose up -d\n\n## restart your applications\n
"},{"location":"setup/overview/","title":"Seafile Docker overview","text":"Seafile docker based installation consist of the following components (docker images):
SSL
configurationSeafile version 11.0 or later is required to work with SeaDoc
"},{"location":"setup/run_seafile_as_non_root_user_inside_docker/","title":"Run Seafile as non root user inside docker","text":"You can use run seafile as non root user in docker.
First add the NON_ROOT=true
to the .env
.
NON_ROOT=true\n
Then modify /opt/seafile-data/seafile/
permissions.
chmod -R a+rwx /opt/seafile-data/seafile/\n
Then destroy the containers and run them again:
docker compose down\ndocker compose up -d\n
Now you can run Seafile as seafile
user.
Tip
When doing maintenance, other scripts in docker are also required to be run as seafile
user, e.g. su seafile -c ./seaf-gc.sh
You can use one of the following methods to start Seafile container on system bootup.
"},{"location":"setup/seafile_docker_autostart/#modify-docker-composeservice","title":"Modify docker-compose.service","text":"Add docker-compose.service
vim /etc/systemd/system/docker-compose.service
[Unit]\nDescription=Docker Compose Application Service\nRequires=docker.service\nAfter=docker.service\n\n[Service]\nType=forking\nRemainAfterExit=yes\nWorkingDirectory=/opt/ \nExecStart=/usr/bin/docker compose up -d\nExecStop=/usr/bin/docker compose down\nTimeoutStartSec=0\n\n[Install]\nWantedBy=multi-user.target\n
Note
WorkingDirectory
is the absolute path to the seafile-server.yml
file directory.
Set the docker-compose.service
file to 644 permissions
chmod 644 /etc/systemd/system/docker-compose.service\n
Load autostart configuration
systemctl daemon-reload\nsystemctl enable docker-compose.service\n
Add configuration restart: unless-stopped
for each container in the components of Seafile docker. Take seafile-server.yml
for example
services:\ndb:\n image: mariadb:10.11\n container_name: seafile-mysql-1\n restart: unless-stopped\n\nmemcached:\n image: memcached:1.6.18\n container_name: seafile-memcached\n restart: unless-stopped\n\nelasticsearch:\n image: elasticsearch:8.6.2\n container_name: seafile-elasticsearch\n restart: unless-stopped\n\nseafile:\n image: seafileltd/seafile-pro-mc:12.0-latest\n container_name: seafile\n restart: unless-stopped\n
Tip
Add restart: unless-stopped
, and the Seafile container will automatically start when Docker starts. If the Seafile container has been removed (e.g., by docker compose down), it will not start automatically.
Seafile Community Edition requires a minimum of 2 cores and 2GB RAM.
"},{"location":"setup/setup_ce_by_docker/#getting-started","title":"Getting started","text":"The following assumptions and conventions are used in the rest of this document:
/opt/seafile
is the directory for storing Seafile docker compose files. If you decide to put Seafile in a different directory \u2014 which you can \u2014 adjust all paths accordingly./opt/seafile-mysql
and /opt/seafile-data
, respectively. It is not recommended to change these paths. If you do, account for it when following these instructions.Use the official installation guide for your OS to install Docker.
"},{"location":"setup/setup_ce_by_docker/#download-and-modify-env","title":"Download and modify.env
","text":"From Seafile Docker 12.0, we use .env
, seafile-server.yml
and caddy.yml
files for configuration
mkdir /opt/seafile\ncd /opt/seafile\n\n# Seafile CE 12.0\nwget -O .env https://manual.seafile.com/12.0/docker/ce/env\nwget https://manual.seafile.com/12.0/docker/ce/seafile-server.yml\nwget https://manual.seafile.com/12.0/docker/caddy.yml\n\nnano .env\n
The following fields merit particular attention:
Variable Description Default ValueSEAFILE_VOLUME
The volume directory of Seafile data /opt/seafile-data
SEAFILE_MYSQL_VOLUME
The volume directory of MySQL data /opt/seafile-mysql/db
SEAFILE_CADDY_VOLUME
The volume directory of Caddy data used to store certificates obtained from Let's Encrypt's /opt/seafile-caddy
INIT_SEAFILE_MYSQL_ROOT_PASSWORD
The root
password of MySQL (Only required on first deployment) SEAFILE_MYSQL_DB_USER
The user of MySQL (database
- user
can be found in conf/seafile.conf
) seafile
SEAFILE_MYSQL_DB_PASSWORD
The user seafile
password of MySQL (required) SEAFILE_MYSQL_DB_CCNET_DB_NAME
The database name of ccnet ccnet_db
SEAFILE_MYSQL_DB_SEAFILE_DB_NAME
The database name of seafile seafile_db
SEAFILE_MYSQL_DB_SEAHUB_DB_NAME
The database name of seahub seahub_db
JWT
JWT_PRIVATE_KEY, A random string with a length of no less than 32 characters is required for Seafile, which can be generated by using pwgen -s 40 1
(required) SEAFILE_SERVER_HOSTNAME
Seafile server hostname or domain (required) SEAFILE_SERVER_PROTOCOL
Seafile server protocol (http or https) http
TIME_ZONE
Time zone UTC
INIT_SEAFILE_ADMIN_EMAIL
Admin username me@example.com
(Recommend modifications) INIT_SEAFILE_ADMIN_PASSWORD
Admin password asecret
(Recommend modifications)"},{"location":"setup/setup_ce_by_docker/#start-seafile-server","title":"Start Seafile server","text":"Start Seafile server with the following command
docker compose up -d\n
Note
You must run the above command in the directory with the .env
. If .env
file is elsewhere, please run
docker compose -f /path/to/.env up -d\n
Success
After starting the services, you can see the initialization progress by tracing the logs of container seafile
(i.e., docker logs seafile -f
)
---------------------------------\nThis is your configuration\n---------------------------------\n\n server name: seafile\n server ip/domain: seafile.example.com\n\n seafile data dir: /opt/seafile/seafile-data\n fileserver port: 8082\n\n database: create new\n ccnet database: ccnet_db\n seafile database: seafile_db\n seahub database: seahub_db\n database user: seafile\n\n\nGenerating seafile configuration ...\n\ndone\nGenerating seahub configuration ...\n\n----------------------------------------\nNow creating seafevents database tables ...\n\n----------------------------------------\n----------------------------------------\nNow creating ccnet database tables ...\n\n----------------------------------------\n----------------------------------------\nNow creating seafile database tables ...\n\n----------------------------------------\n----------------------------------------\nNow creating seahub database tables ...\n\n----------------------------------------\n\ncreating seafile-server-latest symbolic link ... done\n\n-----------------------------------------------------------------\nYour seafile server configuration has been finished successfully.\n-----------------------------------------------------------------\n
And then you can see the following messages indicating that the Seafile server started successfully:
Starting seafile server, please wait ...\nSeafile server started\n\nDone.\n\nStarting seahub at port 8000 ...\n\n----------------------------------------\nSuccessfully created seafile admin\n----------------------------------------\n\nSeahub is started\n\nDone.\n
Finally, you can go to http://seafile.example.com
to use Seafile.
/opt/seafile-data
","text":"Placeholder spot for shared volumes. You may elect to store certain persistent information outside of a container, in our case we keep various log files and upload directory outside. This allows you to rebuild containers easily without losing important information.
/opt/seafile-data/seafile/logs/seafile.log
./var/log
inside the container. /opt/seafile-data/logs/var-log/nginx
contains the logs of Nginx in the Seafile container.Tip
From Seafile Docker 12.0, we use the Caddy to do web service proxy. If you would like to access the logs of Caddy, you can use following command:
docker logs seafile-caddy --follow\n
"},{"location":"setup/setup_ce_by_docker/#find-logs","title":"Find logs","text":"To monitor container logs (from outside of the container), please use the following commands:
# if the `.env` file is in current directory:\ndocker compose logs --follow\n# if the `.env` file is elsewhere:\ndocker compose -f /path/to/.env logs --follow\n\n# you can also specify container name:\ndocker compose logs seafile --follow\n# or, if the `.env` file is elsewhere:\ndocker compose -f /path/to/.env logs seafile --follow\n
The Seafile logs are under /shared/logs/seafile
in the docker, or /opt/seafile-data/logs/seafile
on the server that runs docker.
The system logs are under /shared/logs/var-log
, or /opt/seafile-data/logs/var-log
in the server that run the docker.
To monitor all Seafile logs simultaneously (from outside of the container), run
sudo tail -f $(find /opt/seafile-data/ -type f -name *.log 2>/dev/null)\n
"},{"location":"setup/setup_ce_by_docker/#more-configuration-options","title":"More configuration options","text":"The config files are under /opt/seafile-data/seafile/conf
. You can modify the configurations according to configuration section
Ensure the container is running, then enter this command:
docker exec -it seafile /opt/seafile/seafile-server-latest/reset-admin.sh\n
Enter the username and password according to the prompts. You now have a new admin account.
"},{"location":"setup/setup_ce_by_docker/#backup-and-recovery","title":"Backup and recovery","text":"Follow the instructions in Backup and restore for Seafile Docker
"},{"location":"setup/setup_ce_by_docker/#garbage-collection","title":"Garbage collection","text":"When files are deleted, the blocks comprising those files are not immediately removed as there may be other files that reference those blocks (due to the magic of deduplication). To remove them, Seafile requires a 'garbage collection' process to be run, which detects which blocks no longer used and purges them.
"},{"location":"setup/setup_ce_by_docker/#faq","title":"FAQ","text":"Q: If I want enter into the Docker container, which command I can use?
A: You can enter into the docker container using the command:
docker exec -it seafile /bin/bash\n
Q: I forgot the Seafile admin email address/password, how do I create a new admin account?
A: You can create a new admin account by running
docker exec -it seafile /opt/seafile/seafile-server-latest/reset-admin.sh\n
The Seafile service must be up when running the superuser command.
Q: If, for whatever reason, the installation fails, how do I to start from a clean slate again?
A: Remove the directories /opt/seafile, /opt/seafile-data and /opt/seafile-mysql and start again.
Q: Something goes wrong during the start of the containers. How can I find out more?
A: You can view the docker logs using this command: docker compose logs -f
.
This manual explains how to deploy and run Seafile Server Professional Edition (Seafile PE) on a Linux server using Docker and Docker Compose. The deployment has been tested for Debian/Ubuntu and CentOS, but Seafile PE should also work on other Linux distributions.
"},{"location":"setup/setup_pro_by_docker/#requirements","title":"Requirements","text":"Seafile PE requires a minimum of 2 cores and 2GB RAM.
Other requirements for Seafile PE
If Elasticsearch is installed on the same server, the minimum requirements are 4 cores and 4 GB RAM, and make sure the mmapfs counts do not cause excptions like out of memory, which can be increased by following command (see https://www.elastic.co/guide/en/elasticsearch/reference/current/vm-max-map-count.html for futher details):
sysctl -w vm.max_map_count=262144 #run as root\n
or modify /etc/sysctl.conf and reboot to set this value permanently:
nano /etc/sysctl.conf\n\n# modify vm.max_map_count\nvm.max_map_count=262144\n
About license
Seafile PE can be used without a paid license with up to three users. Licenses for more user can be purchased in the Seafile Customer Center or contact Seafile Sales at sales@seafile.com. For futher details, please refer the license page of Seafile PE.
"},{"location":"setup/setup_pro_by_docker/#setup","title":"Setup","text":"The following assumptions and conventions are used in the rest of this document:
/opt/seafile
is the directory of Seafile for storing Seafile docker files. If you decide to put Seafile in a different directory, adjust all paths accordingly.Use the official installation guide for your OS to install Docker.
"},{"location":"setup/setup_pro_by_docker/#downloading-the-seafile-image","title":"Downloading the Seafile Image","text":"docker pull seafileltd/seafile-pro-mc:12.0-latest\n
Note
Since v12.0, Seafile PE versions are hosted on DockerHub and does not require username and password to download.
Note
Older Seafile PE versions are available private docker repository (back to Seafile 7.0). You can get the username and password on the download page in the Customer Center.
"},{"location":"setup/setup_pro_by_docker/#downloading-and-modifying-env","title":"Downloading and Modifying.env
","text":"From Seafile Docker 12.0, we use .env
, seafile-server.yml
and caddy.yml
files for configuration.
mkdir /opt/seafile\ncd /opt/seafile\n\n# Seafile PE 12.0\nwget -O .env https://manual.seafile.com/12.0/docker/pro/env\nwget https://manual.seafile.com/12.0/docker/pro/seafile-server.yml\nwget https://manual.seafile.com/12.0/docker/caddy.yml\n\nnano .env\n
The following fields merit particular attention:
Variable Description Default ValueSEAFILE_VOLUME
The volume directory of Seafile data /opt/seafile-data
SEAFILE_MYSQL_VOLUME
The volume directory of MySQL data /opt/seafile-mysql/db
SEAFILE_CADDY_VOLUME
The volume directory of Caddy data used to store certificates obtained from Let's Encrypt's /opt/seafile-caddy
SEAFILE_ELASTICSEARCH_VOLUME
The volume directory of Elasticsearch data /opt/seafile-elasticsearch/data
INIT_SEAFILE_MYSQL_ROOT_PASSWORD
The root
password of MySQL (Only required on first deployment) SEAFILE_MYSQL_DB_USER
The user of MySQL (database
- user
can be found in conf/seafile.conf
) seafile
SEAFILE_MYSQL_DB_PASSWORD
The user seafile
password of MySQL (required) SEAFILE_MYSQL_DB_CCNET_DB_NAME
The database name of ccnet ccnet_db
SEAFILE_MYSQL_DB_SEAFILE_DB_NAME
The database name of seafile seafile_db
SEAFILE_MYSQL_DB_SEAHUB_DB_NAME
The database name of seahub seahub_db
JWT
JWT_PRIVATE_KEY, A random string with a length of no less than 32 characters is required for Seafile, which can be generated by using pwgen -s 40 1
(required) SEAFILE_SERVER_HOSTNAME
Seafile server hostname or domain (required) SEAFILE_SERVER_PROTOCOL
Seafile server protocol (http or https) http
TIME_ZONE
Time zone UTC
INIT_SEAFILE_ADMIN_EMAIL
Synchronously set admin username during initialization me@example.com INIT_SEAFILE_ADMIN_PASSWORD
Synchronously set admin password during initialization asecret INIT_S3_STORAGE_BACKEND_CONFIG
Whether to configure S3 storage backend synchronously during initialization (i.e., the following variables with prefix INIT_S3_*
, for more details, please refer to AWS S3) false INIT_S3_COMMIT_BUCKET
S3 storage backend commit objects bucket (only valid when INIT_S3_STORAGE_BACKEND_CONFIG
sets to true
) (required when INIT_S3_STORAGE_BACKEND_CONFIG
sets to true
) INIT_S3_FS_BUCKET
S3 storage backend fs objects bucket (only valid when INIT_S3_STORAGE_BACKEND_CONFIG
sets to true
) (required when INIT_S3_STORAGE_BACKEND_CONFIG
sets to true
) INIT_S3_BLOCK_BUCKET
S3 storage backend block objects bucket (only valid when INIT_S3_STORAGE_BACKEND_CONFIG
sets to true
) (required when INIT_S3_STORAGE_BACKEND_CONFIG
sets to true
) INIT_S3_KEY_ID
S3 storage backend key ID (only valid when INIT_S3_STORAGE_BACKEND_CONFIG
sets to true
) (required when INIT_S3_STORAGE_BACKEND_CONFIG
sets to true
) INIT_S3_SECRET_KEY
S3 storage backend secret key (only valid when INIT_S3_STORAGE_BACKEND_CONFIG
sets to true
) (required when INIT_S3_STORAGE_BACKEND_CONFIG
sets to true
) INIT_S3_USE_V4_SIGNATURE
Use the v4 protocol of S3 if enabled (only valid when INIT_S3_STORAGE_BACKEND_CONFIG
sets to true
) true
INIT_S3_AWS_REGION
Region of your buckets (only valid when INIT_S3_STORAGE_BACKEND_CONFIG
and INIT_S3_USE_V4_SIGNATURE
sets to true
) us-east-1
INIT_S3_HOST
Host of your buckets (only valid when INIT_S3_STORAGE_BACKEND_CONFIG
and INIT_S3_USE_V4_SIGNATURE
sets to true
) s3.us-east-1.amazonaws.com
INIT_S3_USE_HTTPS
Use HTTPS connections to S3 if enabled (only valid when INIT_S3_STORAGE_BACKEND_CONFIG
sets to true
) true
To conclude, set the directory permissions of the Elasticsearch volumne:
mkdir -p /opt/seafile-elasticsearch/data\nchmod 777 -R /opt/seafile-elasticsearch/data\n
"},{"location":"setup/setup_pro_by_docker/#starting-the-docker-containers","title":"Starting the Docker Containers","text":"Run docker compose in detached mode:
docker compose up -d\n
Note
You must run the above command in the directory with the .env
. If .env
file is elsewhere, please run
docker compose -f /path/to/.env up -d\n
Success
After starting the services, you can see the initialization progress by tracing the logs of container seafile
(i.e., docker logs seafile -f
)
---------------------------------\nThis is your configuration\n---------------------------------\n\n server name: seafile\n server ip/domain: seafile.example.com\n\n seafile data dir: /opt/seafile/seafile-data\n fileserver port: 8082\n\n database: create new\n ccnet database: ccnet_db\n seafile database: seafile_db\n seahub database: seahub_db\n database user: seafile\n\n\nGenerating seafile configuration ...\n\ndone\nGenerating seahub configuration ...\n\n----------------------------------------\nNow creating seafevents database tables ...\n\n----------------------------------------\n----------------------------------------\nNow creating ccnet database tables ...\n\n----------------------------------------\n----------------------------------------\nNow creating seafile database tables ...\n\n----------------------------------------\n----------------------------------------\nNow creating seahub database tables ...\n\n----------------------------------------\n\ncreating seafile-server-latest symbolic link ... done\n\n-----------------------------------------------------------------\nYour seafile server configuration has been finished successfully.\n-----------------------------------------------------------------\n
Then you will see the following messages, indicating that the Seafile server has started successfully:
Starting seafile server, please wait ...\nSeafile server started\n\nDone.\n\nStarting seahub at port 8000 ...\n\n----------------------------------------\nSuccessfully created seafile admin\n----------------------------------------\n\nSeahub is started\n\nDone.\n
Finally, you can go to http://seafile.example.com
to use Seafile.
A 502 Bad Gateway error means that the system has not yet completed the initialization.
"},{"location":"setup/setup_pro_by_docker/#find-logs","title":"Find logs","text":"To view Seafile docker logs, please use the following command
docker compose logs -f\n
The Seafile logs are under /shared/logs/seafile
inside the container, or /opt/seafile-data/logs/seafile
on the host running Docker.
The system logs are under /shared/logs/var-log
inside the container, or /opt/seafile-data/logs/var-log
on the host running Docker.
If you have a seafile-license.txt
license file, simply put it in the volume of the Seafile container. The volume's default path in the Compose file is /opt/seafile-data
. If you have modified the path, save the license file under your custom path.
Then restart Seafile:
docker compose down\n\ndocker compose up -d\n
"},{"location":"setup/setup_pro_by_docker/#seafile-directory-structure","title":"Seafile directory structure","text":""},{"location":"setup/setup_pro_by_docker/#path-optseafile-data","title":"Path /opt/seafile-data
","text":"Placeholder spot for shared volumes. You may elect to store certain persistent information outside of a container, in our case we keep various log files and upload directory outside. This allows you to rebuild containers easily without losing important information.
The main Seafile log is /opt/seafile-data/seafile/logs/seafile.log
, and /opt/seafile-data/logs/var-log maps to /var/log
inside the container; for example, you can find the nginx logs in /opt/seafile-data/logs/var-log/nginx/
. The command docker container list
should list the containers specified in the .env
file.
The directory layout of the Seafile container's volume should look as follows:
$ tree /opt/seafile-data -L 2\n/opt/seafile-data\n\u251c\u2500\u2500 logs\n\u2502\u00a0\u00a0 \u2514\u2500\u2500 var-log\n\u251c\u2500\u2500 nginx\n\u2502\u00a0\u00a0 \u2514\u2500\u2500 conf\n\u2514\u2500\u2500 seafile\n \u00a0\u00a0 \u251c\u2500\u2500 ccnet\n \u00a0\u00a0 \u251c\u2500\u2500 conf\n \u00a0\u00a0 \u251c\u2500\u2500 logs\n \u00a0\u00a0 \u251c\u2500\u2500 pro-data\n \u00a0\u00a0 \u251c\u2500\u2500 seafile-data\n \u00a0\u00a0 \u2514\u2500\u2500 seahub-data\n
All Seafile config files are stored in /opt/seafile-data/seafile/conf
. The nginx config file is in /opt/seafile-data/nginx/conf
.
Any modification of a configuration file requires a restart of Seafile to take effect:
docker compose restart\n
All Seafile log files are stored in /opt/seafile-data/seafile/logs
whereas all other log files are in /opt/seafile-data/logs/var-log
.
Follow the instructions in Backup and restore for Seafile Docker
"},{"location":"setup/setup_pro_by_docker/#garbage-collection","title":"Garbage Collection","text":"When files are deleted, the blocks comprising those files are not immediately removed as there may be other files that reference those blocks (due to the magic of deduplication). To remove them, Seafile requires a 'garbage collection' process to be run, which detects which blocks no longer used and purges them.
"},{"location":"setup/setup_pro_by_docker/#faq","title":"FAQ","text":"Q: If I want enter into the Docker container, which command I can use?
A: You can enter into the docker container using the command:
docker exec -it seafile /bin/bash\n
Q: I forgot the Seafile admin email address/password, how do I create a new admin account?
A: You can create a new admin account by running
docker exec -it seafile /opt/seafile/seafile-server-latest/reset-admin.sh\n
The Seafile service must be up when running the superuser command.
Q: If, for whatever reason, the installation fails, how do I start from a clean slate again?
A: Remove the directories /opt/seafile, /opt/seafile-data, /opt/seafile-elasticsearch, and /opt/seafile-mysql and start again.
Q: Something went wrong during the start of the containers. How can I find out more?
A: You can view the docker logs using this command: docker compose logs -f
.
The entire db
service needs to be removed (or commented out) in seafile-server.yml
if you would like to use an existing MySQL server; otherwise a redundant database service will be running:
services:\n\n # comment out or remove the entire `db` service\n #db:\n #image: ${SEAFILE_DB_IMAGE:-mariadb:10.11}\n #container_name: seafile-mysql\n # ... other parts in service `db`\n\n # do not change other services\n...\n
What's more, you have to modify the .env
to set the MySQL-related fields correctly:
SEAFILE_MYSQL_DB_HOST=192.168.0.2\nSEAFILE_MYSQL_DB_PORT=3306\nINIT_SEAFILE_MYSQL_ROOT_PASSWORD=ROOT_PASSWORD\nSEAFILE_MYSQL_DB_PASSWORD=PASSWORD\n
Tip
INIT_SEAFILE_MYSQL_ROOT_PASSWORD
is needed during installation (i.e., the first deployment). After Seafile is installed, the user seafile
will be used to connect to the MySQL server (SEAFILE_MYSQL_DB_PASSWORD), then you can remove the INIT_SEAFILE_MYSQL_ROOT_PASSWORD
.
Ceph is a scalable distributed storage system. It's recommended to use Ceph's S3 Gateway (RGW) to integrate with Seafile. Seafile can also use Ceph's RADOS object storage layer as a storage backend, but using RADOS requires linking with the librados library, which may introduce library incompatibility issues during deployment. Furthermore, the S3 Gateway provides an easier-to-manage HTTP-based interface. If you want to integrate with the S3 gateway, please refer to the \"Use S3-compatible Object Storage\" section in this documentation. The documentation below is for integrating with RADOS.
"},{"location":"setup/setup_with_ceph/#copy-ceph-conf-file-and-client-keyring","title":"Copy ceph conf file and client keyring","text":"Seafile acts as a client to Ceph/RADOS, so it needs to access ceph cluster's conf file and keyring. You have to copy these files from a ceph admin node's /etc/ceph directory to the seafile machine.
seafile-machine# sudo scp -r user@ceph-admin-node:/etc/ceph /etc\n
"},{"location":"setup/setup_with_ceph/#install-and-enable-memcached","title":"Install and enable memcached","text":"For best performance, Seafile requires install memcached or redis and enable cache for objects.
We recommend to allocate at least 128MB memory for object cache.
"},{"location":"setup/setup_with_ceph/#install-python-ceph-library","title":"Install Python Ceph Library","text":"File search and WebDAV functions rely on Python Ceph library installed in the system.
sudo apt-get install python3-rados\n
"},{"location":"setup/setup_with_ceph/#edit-seafile-configuration","title":"Edit seafile configuration","text":"Edit seafile.conf
, add the following lines:
[block_backend]\nname = ceph\nceph_config = /etc/ceph/ceph.conf\npool = seafile-blocks\n\n[commit_object_backend]\nname = ceph\nceph_config = /etc/ceph/ceph.conf\npool = seafile-commits\n\n[fs_object_backend]\nname = ceph\nceph_config = /etc/ceph/ceph.conf\npool = seafile-fs\n
You also need to add memory cache configurations
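A minimal cache section in seafile.conf might look like the following sketch (the memcached address is an assumption; adjust it to your cache setup, or use the Redis variant described in the caching documentation):

```ini
; Hypothetical example: memcached running locally on the default port
[memcached]
memcached_options = --SERVER=127.0.0.1 --POOL-MIN=10 --POOL-MAX=100
```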
It's required to create separate pools for commit, fs, and block objects.
ceph-admin-node# rados mkpool seafile-blocks\nceph-admin-node# rados mkpool seafile-commits\nceph-admin-node# rados mkpool seafile-fs\n
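Note that the rados mkpool subcommand has been removed from recent Ceph releases; on newer clusters the equivalent is ceph osd pool create (a sketch using the same pool names as above):

```shell
# On newer Ceph releases, create the pools with:
ceph osd pool create seafile-blocks
ceph osd pool create seafile-commits
ceph osd pool create seafile-fs
```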
Troubleshooting librados incompatibility issues
Since version 8.0, Seafile bundles librados from Ceph 16. On some systems you may find that Seafile fails to connect to your Ceph cluster. In such cases, you can usually solve this by removing the bundled librados libraries and using the ones installed in the OS.
To do this, you have to remove a few bundled libraries:
cd seafile-server-latest/seafile/lib\nrm librados.so.2 libstdc++.so.6 libnspr4.so\n
"},{"location":"setup/setup_with_ceph/#use-arbitary-ceph-user","title":"Use arbitary Ceph user","text":"The above configuration will use the default (client.admin) user to connect to Ceph. You may want to use some other Ceph user to connect. This is supported in Seafile. To specify the Ceph user, you have to add a ceph_client_id
option to seafile.conf, as the following:
[block_backend]\nname = ceph\nceph_config = /etc/ceph/ceph.conf\n# Specify Ceph user for Seafile here\nceph_client_id = seafile\npool = seafile-blocks\n\n[commit_object_backend]\nname = ceph\nceph_config = /etc/ceph/ceph.conf\n# Specify Ceph user for Seafile here\nceph_client_id = seafile\npool = seafile-commits\n\n[fs_object_backend]\nname = ceph\nceph_config = /etc/ceph/ceph.conf\n# Specify Ceph user for Seafile here\nceph_client_id = seafile\npool = seafile-fs\n\n# Memcached or Redis configs\n......\n
You can create a ceph user for seafile on your ceph cluster like this:
ceph auth add client.seafile \\\n mds 'allow' \\\n mon 'allow r' \\\n osd 'allow rwx pool=seafile-blocks, allow rwx pool=seafile-commits, allow rwx pool=seafile-fs'\n
You also have to add this user's keyring path to /etc/ceph/ceph.conf:
[client.seafile]\nkeyring = <path to user's keyring file>\n
"},{"location":"setup/setup_with_multiple_storage_backends/","title":"Multiple Storage Backend","text":"There are some use cases that supporting multiple storage backends in Seafile server is needed. Such as:
The library data in Seafile server is spread across multiple storage backends at the granularity of libraries. All the data in a library is located in the same storage backend. The mapping from a library to its storage backend is stored in a database table. Different mapping policies can be chosen based on the use case.
To use this feature, you need to:
In Seafile server, a storage backend is represented by the concept of \"storage class\". A storage class is defined by specifying the following information:
storage_id
: an internal string ID to identify the storage class. It's not visible to users. For example \"primary storage\".name
: A user visible name for the storage class.is_default
: whether this storage class is the default. This option is effective in two cases:commits
: the storage for storing the commit objects for this class. It can be any storage that Seafile supports, like file system, ceph, s3.fs
: the storage for storing the fs objects for this class. It can be any storage that Seafile supports, like file system, ceph, s3.blocks
: the storage for storing the block objects for this class. It can be any storage that Seafile supports, like file system, ceph, s3.commit, fs, and blocks can be stored in different storages. This provides the most flexible way to define storage classes.
"},{"location":"setup/setup_with_multiple_storage_backends/#seafile-configuration","title":"Seafile Configuration","text":"As Seafile server before 6.3 version doesn't support multiple storage classes, you have to explicitly enable this new feature and define storage classes with a different syntax than how we define storage backend before.
First, you have to enable this feature in seafile.conf.
[storage]\nenable_storage_classes = true\nstorage_classes_file = /opt/seafile_storage_classes.json\n
You also need to add memory cache configurations to seafile.conf
If installing Seafile as Docker containers, place the seafile_storage_classes.json
file on your local disk in a sub-directory of the location that is mounted to the seafile
container, and set the storage_classes_file
configuration above to a path relative to the /shared/
directory mounted on the seafile
container.
For example, if the configuration of the seafile
container in your docker-compose.yml
file is similar to the following:
# docker-compose.yml\nservices:\n seafile:\n container_name: seafile\n volumes:\n - /opt/seafile-data:/shared\n
Then place the JSON file within any sub-directory of /opt/seafile-data
(such as /opt/seafile-data/conf/
) and then configure seafile.conf
like so:
[storage]\nenable_storage_classes = true\nstorage_classes_file = /shared/conf/seafile_storage_classes.json\n
You also need to add memory cache configurations to seafile.conf
The JSON file is an array of objects. Each object defines a storage class. The fields in the definition correspond to the information we need to specify for a storage class. Below is an example:
[\n {\n \"storage_id\": \"hot_storage\",\n \"name\": \"Hot Storage\",\n \"is_default\": true,\n \"commits\": {\n \"backend\": \"s3\",\n \"bucket\": \"seafile-commits\",\n \"key\": \"ZjoJ8RPNDqP1vcdD60U4wAHwUQf2oJYqxN27oR09\",\n \"key_id\": \"AKIAIOT3GCU5VGCCL44A\"\n },\n \"fs\": {\n \"backend\": \"s3\",\n \"bucket\": \"seafile-fs\",\n \"key\": \"ZjoJ8RPNDqP1vcdD60U4wAHwUQf2oJYqxN27oR09\",\n \"key_id\": \"AKIAIOT3GCU5VGCCL44A\"\n },\n \"blocks\": {\n \"backend\": \"s3\",\n \"bucket\": \"seafile-blocks\",\n \"key\": \"ZjoJ8RPNDqP1vcdD60U4wAHwUQf2oJYqxN27oR09\",\n \"key_id\": \"AKIAIOT3GCU5VGCCL44A\"\n }\n },\n {\n \"storage_id\": \"cold_storage\",\n \"name\": \"Cold Storage\",\n \"is_default\": false,\n \"fs\": {\n \"backend\": \"fs\",\n \"dir\": \"/storage/seafile/seafile-data\"\n },\n \"commits\": {\n \"backend\": \"fs\",\n \"dir\": \"/storage/seafile/seafile-data\"\n },\n \"blocks\": {\n \"backend\": \"fs\",\n \"dir\": \"/storage/seafile/seaflle-data\"\n }\n },\n {\n \"storage_id\": \"swift_storage\",\n \"name\": \"Swift Storage\",\n \"fs\": {\n \"backend\": \"swift\",\n \"tenant\": \"adminTenant\",\n \"user_name\": \"admin\",\n \"password\": \"openstack\",\n \"container\": \"seafile-commits\",\n \"auth_host\": \"192.168.56.31:5000\",\n \"auth_ver\": \"v2.0\"\n },\n \"commits\": {\n \"backend\": \"swift\",\n \"tenant\": \"adminTenant\",\n \"user_name\": \"admin\",\n \"password\": \"openstack\",\n \"container\": \"seafile-fs\",\n \"auth_host\": \"192.168.56.31:5000\",\n \"auth_ver\": \"v2.0\"\n },\n \"blocks\": {\n \"backend\": \"swift\",\n \"tenant\": \"adminTenant\",\n \"user_name\": \"admin\",\n \"password\": \"openstack\",\n \"container\": \"seafile-blocks\",\n \"auth_host\": \"192.168.56.31:5000\",\n \"auth_ver\": \"v2.0\",\n \"region\": \"RegionTwo\"\n }\n },\n {\n \"storage_id\": \"ceph_storage\",\n \"name\": \"ceph Storage\",\n \"fs\": {\n \"backend\": \"ceph\",\n \"ceph_config\": \"/etc/ceph/ceph.conf\",\n \"pool\": \"seafile-fs\"\n },\n \"commits\": {\n 
\"backend\": \"ceph\",\n \"ceph_config\": \"/etc/ceph/ceph.conf\",\n \"pool\": \"seafile-commits\"\n },\n \"blocks\": {\n \"backend\": \"ceph\",\n \"ceph_config\": \"/etc/ceph/ceph.conf\",\n \"pool\": \"seafile-blocks\"\n }\n }\n]\n
As you may have seen, the commits
, fs
and blocks
information syntax is similar to what is used in [commit_object_backend]
, [fs_object_backend]
and [block_backend]
section of seafile.conf. Refer to the detailed syntax in the documentation for the storage you use. For example, if you use S3 storage, refer to S3 Storage.
If you use file system as storage for fs
, commits
or blocks
, you must explicitly provide the path for the seafile-data
directory. The objects will be stored in storage/commits
, storage/fs
, storage/blocks
under this path.
Currently file system, S3 and Swift backends are supported. Ceph/RADOS is also supported since version 7.0.14.
"},{"location":"setup/setup_with_multiple_storage_backends/#library-mapping-policies","title":"Library Mapping Policies","text":"Library mapping policies decide the storage class a library uses. Currently we provide 3 policies for 3 different use cases. The storage class of a library is decided on creation and stored in a database table. The storage class of a library won't change if the mapping policy is changed later.
Before choosing your mapping policy, you need to enable the storage classes feature in seahub_settings.py:
ENABLE_STORAGE_CLASSES = True\n
"},{"location":"setup/setup_with_multiple_storage_backends/#user-chosen","title":"User Chosen","text":"This policy lets the users choose which storage class to use when creating a new library. The users can select any storage class that's been defined in the JSON file.
To use this policy, add the following option in seahub_settings.py:
STORAGE_CLASS_MAPPING_POLICY = 'USER_SELECT'\n
If you enable storage class support but don't explicitly set STORAGE_CLASS_MAPPING_POLICY
in seahub_settings.py, this policy is used by default.
Due to storage cost or management considerations, sometimes a system admin wants to make different types of users use different storage backends (or classes). You can configure a user's storage classes based on their role.
A new option storage_ids
is added to the role configuration in seahub_settings.py
to assign storage classes to each role. If only one storage class is assigned to a role, the users with this role cannot choose a storage class for their libraries; if more than one class is assigned, the users can choose among them. If no storage class is assigned to a role, the default class specified in the JSON file will be used.
Here are the sample options in seahub_settings.py to use this policy:
ENABLE_STORAGE_CLASSES = True\nSTORAGE_CLASS_MAPPING_POLICY = 'ROLE_BASED'\n\nENABLED_ROLE_PERMISSIONS = {\n 'default': {\n 'can_add_repo': True,\n 'can_add_group': True,\n 'can_view_org': True,\n 'can_use_global_address_book': True,\n 'can_generate_share_link': True,\n 'can_generate_upload_link': True,\n 'can_invite_guest': True,\n 'can_connect_with_android_clients': True,\n 'can_connect_with_ios_clients': True,\n 'can_connect_with_desktop_clients': True,\n 'storage_ids': ['old_version_id', 'hot_storage', 'cold_storage', 'a_storage'],\n },\n 'guest': {\n 'can_add_repo': True,\n 'can_add_group': False,\n 'can_view_org': False,\n 'can_use_global_address_book': False,\n 'can_generate_share_link': False,\n 'can_generate_upload_link': False,\n 'can_invite_guest': False,\n 'can_connect_with_android_clients': False,\n 'can_connect_with_ios_clients': False,\n 'can_connect_with_desktop_clients': False,\n 'storage_ids': ['hot_storage', 'cold_storage'],\n },\n}\n
"},{"location":"setup/setup_with_multiple_storage_backends/#library-id-based-mapping","title":"Library ID Based Mapping","text":"This policy maps libraries to storage classes based on its library ID. The ID of a library is an UUID. In this way, the data in the system can be evenly distributed among the storage classes.
Note
This policy is not designed to be a complete distributed storage solution. It doesn't handle automatic migration of library data between storage classes. If you add more storage classes to the configuration, existing libraries will stay in their original storage classes, while new libraries can be distributed among the new storage classes (backends). You still have to plan the total storage capacity of your system at the beginning.
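As an illustration of how ID-based mapping can spread libraries evenly, here is a sketch of the general idea (hypothetical: this is not Seafile's actual hash function, and the repo ID is taken from the migration example later in this manual):

```shell
# Hypothetical sketch: derive a stable backend index from a library UUID.
repo_id="4c731e5c-f589-4eaa-889f-14c00d4893cb"
n_backends=3

# cksum gives a deterministic CRC of the ID; take it modulo the backend count
crc=$(printf '%s' "$repo_id" | cksum | cut -d' ' -f1)
backend_index=$(( crc % n_backends ))
echo "library $repo_id -> backend $backend_index"
```

Because the hash is deterministic, a library always maps to the same backend, while many random UUIDs spread roughly evenly across the backends.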
To use this policy, first add the following option in seahub_settings.py:
STORAGE_CLASS_MAPPING_POLICY = 'REPO_ID_MAPPING'\n
Then you can add option for_new_library
to the backends that are expected to store new libraries in the JSON file:
[\n{\n\"storage_id\": \"new_backend\",\n\"name\": \"New store\",\n\"for_new_library\": true,\n\"is_default\": false,\n\"fs\": {\"backend\": \"fs\", \"dir\": \"/storage/seafile/new-data\"},\n\"commits\": {\"backend\": \"fs\", \"dir\": \"/storage/seafile/new-data\"},\n\"blocks\": {\"backend\": \"fs\", \"dir\": \"/storage/seafile/new-data\"}\n}\n]\n
"},{"location":"setup/setup_with_multiple_storage_backends/#multiple-storage-backend-data-migration","title":"Multiple Storage Backend Data Migration","text":"Run the migrate-repo.sh
script to migrate library data between different storage backends.
./migrate-repo.sh [repo_id] origin_storage_id destination_storage_id\n
repo_id is optional; if it is not specified, all libraries will be migrated.
Before running the migration script, you can set the OBJECT_LIST_FILE_PATH
environment variable to specify a path prefix to store the migrated object list.
For example:
export OBJECT_LIST_FILE_PATH=/opt/test\n
This will create three files in the specified path (/opt): test_4c731e5c-f589-4eaa-889f-14c00d4893cb.fs
test_4c731e5c-f589-4eaa-889f-14c00d4893cb.commits
test_4c731e5c-f589-4eaa-889f-14c00d4893cb.blocks
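The naming scheme of these files can be sketched as follows (prefix and repo ID taken from the example above):

```shell
# Object lists are written as <OBJECT_LIST_FILE_PATH>_<repo_id>.<object type>
OBJECT_LIST_FILE_PATH=/opt/test
repo_id=4c731e5c-f589-4eaa-889f-14c00d4893cb
for kind in fs commits blocks; do
  echo "${OBJECT_LIST_FILE_PATH}_${repo_id}.${kind}"
done
# prints /opt/test_4c731e5c-f589-4eaa-889f-14c00d4893cb.fs (and .commits, .blocks)
```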
Setting the OBJECT_LIST_FILE_PATH
environment variable has two purposes:
Run the remove-objs.sh
script to delete all objects of a library in the specified storage backend (as with migration, you need to set the OBJECT_LIST_FILE_PATH environment variable first).
./remove-objs.sh repo_id storage_id\n
"},{"location":"setup/setup_with_s3/","title":"Setup With S3 Storage","text":"Deployment notes
If your Seafile server is deployed from binary packages, you have to do the following steps before deploying:
Install boto3
on your machine:
sudo pip install boto3\n
Install and configure memcached or Redis.
For best performance, Seafile requires memory caching for objects. We recommend allocating at least 128MB of memory for memcached or Redis.
The configuration options differ for different S3 storage providers. We'll describe the configurations in separate sections. You also need to add memory cache configurations
New feature from 12.0 pro edition
If you will deploy the Seafile server in Docker, you can modify the following fields in .env
before starting the services:
INIT_S3_STORAGE_BACKEND_CONFIG=true\nINIT_S3_COMMIT_BUCKET=<your-commit-objects>\nINIT_S3_FS_BUCKET=<your-fs-objects>\nINIT_S3_BLOCK_BUCKET=<your-block-objects>\nINIT_S3_KEY_ID=<your-key-id>\nINIT_S3_SECRET_KEY=<your-secret-key>\nINIT_S3_USE_V4_SIGNATURE=true\nINIT_S3_AWS_REGION=us-east-1 # your AWS Region\nINIT_S3_HOST=s3.us-east-1.amazonaws.com # your S3 Host\nINIT_S3_USE_HTTPS=true\n
The above modifications will generate the same configuration files as described in this manual and will take effect when the service is started for the first time.
"},{"location":"setup/setup_with_s3/#how-to-configure-s3-in-seafile","title":"How to configure S3 in Seafile","text":"Seafile configures S3 storage by adding or modifying the following section in seafile.conf
:
[xxx_object_backend]\nname = s3\nbucket = my-xxx-objects\nkey_id = your-key-id\nkey = your-secret-key\nuse_v4_signature = true\nuse_https = true\n... ; other optional configurations\n
You have to create at least 3 buckets for Seafile, corresponding to the sections: commit_object_backend
, fs_object_backend
and block_backend
. For the configurations for each backend section, please refer to the following table:
bucket
Bucket name for commit, fs, and block objects. Make sure it follows S3 naming rules (you can refer the notes below the table). key_id
The key_id
is required to authenticate you to S3. You can find the key_id
in the \"security credentials\" section on your AWS account page. key
The key
is required to authenticate you to S3. You can find the key
in the \"security credentials\" section on your AWS account page. use_v4_signature
There are two versions of authentication protocols that can be used with S3 storage: Version 2 (older, may still be supported by some regions) and Version 4 (current, used by most regions). If you don't set this option, Seafile will use the v2 protocol. It's suggested to use the v4 protocol. use_https
Use https to connect to S3. It's recommended to use https. aws_region
(Optional) If you use the v4 protocol and AWS S3, set this option to the region you chose when you create the buckets. If it's not set and you're using the v4 protocol, Seafile will use us-east-1
as the default. This option will be ignored if you use the v2 protocol. host
(Optional) The endpoint by which you access the storage service. Usually it starts with the region name. For non-AWS providers you must provide the host address; otherwise Seafile will use AWS's address (i.e., s3.us-east-1.amazonaws.com
). sse_c_key
(Optional) A string of 32 characters can be generated by openssl rand -base64 24
. It's required to use V4 authentication protocol and https if you enable SSE-C. path_style_request
(Optional) This option asks Seafile to use URLs like https://192.168.1.123:8080/bucketname/object
to access objects. In Amazon S3, the default URL format is in virtual host style, such as https://bucketname.s3.amazonaws.com/object
. But this style relies on advanced DNS setup, so most self-hosted storage systems only implement the path-style format. We therefore recommend setting this option to true for self-hosted storage. Bucket naming conventions
Whether you use AWS or any other S3-compatible object storage, we recommend that you follow the S3 naming rules. When you create buckets on S3, please read the S3 naming rules first. In particular, do not use capital letters in the name of the bucket (i.e., no camel-style naming, such as MyCommitObjects).
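A rough check of a bucket name against a subset of the S3 rules (3-63 characters; lowercase letters, digits, dots and hyphens; must start and end with a letter or digit) can be sketched like this — a simplification, not the full AWS rule set:

```shell
# Rough bucket-name check covering a subset of the S3 naming rules
check_bucket_name() {
  printf '%s' "$1" | grep -Eq '^[a-z0-9][a-z0-9.-]{1,61}[a-z0-9]$'
}

check_bucket_name my-commit-objects && echo valid     # prints "valid"
check_bucket_name MyCommitObjects || echo invalid     # prints "invalid" (capital letters)
```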
Good bucket names look like the ones used in this manual (e.g. seafile-commits); bad names use capital letters (e.g. MyCommitObjects). Since Pro 11.0, you can use SSE-C with S3. Add the following sse_c_key
to seafile.conf (as shown in the variables table above):
[commit_object_backend]\nname = s3\n......\nuse_v4_signature = true\nuse_https = true\nsse_c_key = XiqMSf3x5ja4LRibBbV0sVntVpdHXl3P\n\n[fs_object_backend]\nname = s3\n......\nuse_v4_signature = true\nuse_https = true\nsse_c_key = XiqMSf3x5ja4LRibBbV0sVntVpdHXl3P\n\n[block_backend]\nname = s3\n......\nuse_v4_signature = true\nuse_https = true\nsse_c_key = XiqMSf3x5ja4LRibBbV0sVntVpdHXl3P\n
sse_c_key
is a string of 32 characters.
You can generate sse_c_key with the following command. Note that the key doesn't have to be base64-encoded; it can be any 32-character random string. The example just shows one possible way to generate such a key.
openssl rand -base64 24\n
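Since any 32-character string works, you can sanity-check a generated key's length before putting it in seafile.conf:

```shell
# Generate a candidate key and confirm it is exactly 32 characters
# (24 random bytes encode to 32 base64 characters)
sse_c_key=$(openssl rand -base64 24)
echo "${#sse_c_key}"   # prints 32
```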
Warning
If you have existing data in your S3 storage bucket, turning on the above configuration will make your data inaccessible. This is because the Seafile server doesn't support mixing encrypted and non-encrypted objects in the same bucket. You have to create a new bucket and migrate your data to it by following the storage backend migration documentation.
"},{"location":"setup/setup_with_s3/#example","title":"Example","text":"AWSExoscaleHetznerOther Public Hosted S3 StorageSelf-hosted S3 Storage[commit_object_backend]\nname = s3\nbucket = my-commit-objects\nkey_id = your-key-id\nkey = your-secret-key\nuse_v4_signature = true\naws_region = eu-central-1\nuse_https = true\n\n[fs_object_backend]\nname = s3\nbucket = my-fs-objects\nkey_id = your-key-id\nkey = your-secret-key\nuse_v4_signature = true\naws_region = eu-central-1\nuse_https = true\n\n[block_backend]\nname = s3\nbucket = my-block-objects\nkey_id = your-key-id\nkey = your-secret-key\nuse_v4_signature = true\naws_region = eu-central-1\nuse_https = true\n
[commit_object_backend]\nname = s3\nbucket = your-bucket-name\nhost = sos-de-fra-1.exo.io\nkey_id = ...\nkey = ...\nuse_https = true\npath_style_request = true\n\n[fs_object_backend]\nname = s3\nbucket = your-bucket-name\nhost = sos-de-fra-1.exo.io\nkey_id = ...\nkey = ...\nuse_https = true\npath_style_request = true\n\n[block_backend]\nname = s3\nbucket = your-bucket-name\nhost = sos-de-fra-1.exo.io\nkey_id = ...\nkey = ...\nuse_https = true\npath_style_request = true\n
[commit_object_backend]\nname = s3\nbucket = your-bucket-name\nhost = fsn1.your-objectstorage.com\nkey_id = ...\nkey = ...\nuse_https = true\npath_style_request = true\n\n[fs_object_backend]\nname = s3\nbucket = your-bucket-name\nhost = fsn1.your-objectstorage.com\nkey_id = ...\nkey = ...\nuse_https = true\npath_style_request = true\n\n[block_backend]\nname = s3\nbucket = your-bucket-name\nhost = fsn1.your-objectstorage.com\nkey_id = ...\nkey = ...\nuse_https = true\npath_style_request = true\n
There are other S3-compatible cloud storage providers in the market, such as Backblaze and Wasabi. Configuration for those providers is just a bit different from AWS's. We can't guarantee that the following configuration works for all providers. If you have problems, please contact our support.
[commit_object_backend]\nname = s3\nbucket = my-commit-objects\nhost = <access endpoint for storage provider>\nkey_id = your-key-id\nkey = your-secret-key\n# v2 authentication protocol will be used if not set\nuse_v4_signature = true\n# required for v4 protocol. ignored for v2 protocol.\naws_region = <region name for storage provider>\nuse_https = true\n\n[fs_object_backend]\nname = s3\nbucket = my-fs-objects\nhost = <access endpoint for storage provider>\nkey_id = your-key-id\nkey = your-secret-key\nuse_v4_signature = true\naws_region = <region name for storage provider>\nuse_https = true\n\n[block_backend]\nname = s3\nbucket = my-block-objects\nhost = <access endpoint for storage provider>\nkey_id = your-key-id\nkey = your-secret-key\nuse_v4_signature = true\naws_region = <region name for storage provider>\nuse_https = true\n
Many self-hosted object storage systems are now compatible with the S3 API, such as OpenStack Swift, Ceph's RADOS Gateway and MinIO. You can use these S3-compatible storage systems as a backend for Seafile. Here is an example config:
[commit_object_backend]\nname = s3\nbucket = my-commit-objects\nkey_id = your-key-id\nkey = your-secret-key\nhost = <your s3 api endpoint host>:<your s3 api endpoint port>\npath_style_request = true\nuse_v4_signature = true\nuse_https = true\n\n[fs_object_backend]\nname = s3\nbucket = my-fs-objects\nkey_id = your-key-id\nkey = your-secret-key\nhost = <your s3 api endpoint host>:<your s3 api endpoint port>\npath_style_request = true\nuse_v4_signature = true\nuse_https = true\n\n[block_backend]\nname = s3\nbucket = my-block-objects\nkey_id = your-key-id\nkey = your-secret-key\nhost = <your s3 api endpoint host>:<your s3 api endpoint port>\npath_style_request = true\nuse_v4_signature = true\nuse_https = true\n
"},{"location":"setup/setup_with_s3/#run-and-test","title":"Run and Test","text":"Now you can start Seafile and test
"},{"location":"setup/setup_with_swift/","title":"Setup With OpenStack Swift","text":"This backend uses the native Swift API. Previously users can only use the S3-compatibility layer of Swift. That way is obsolete now.
Since version 6.3, OpenStack Swift v3.0 API is supported.
"},{"location":"setup/setup_with_swift/#prepare","title":"Prepare","text":"To setup Seafile Professional Server with Swift:
Edit seafile.conf
, add the following lines:
[block_backend]\nname = swift\ntenant = yourTenant\nuser_name = user\npassword = secret\ncontainer = seafile-blocks\nauth_host = 192.168.56.31:5000\nauth_ver = v3.0\nregion = yourRegion\n\n[commit_object_backend]\nname = swift\ntenant = yourTenant\nuser_name = user\npassword = secret\ncontainer = seafile-commits\nauth_host = 192.168.56.31:5000\nauth_ver = v3.0\nregion = yourRegion\n\n[fs_object_backend]\nname = swift\ntenant = yourTenant\nuser_name = user\npassword = secret\ncontainer = seafile-fs\nauth_host = 192.168.56.31:5000\nauth_ver = v3.0\nregion = yourRegion\n
You also need to add memory cache configurations
The above config is just an example. You should replace the options according to your own environment.
Seafile supports Swift with Keystone as the authentication mechanism. The auth_host
option is the address and port of the Keystone service. The region
option is used to select the publicURL; if you don't configure it, the first publicURL in the returned authentication information is used.
Seafile also supports Tempauth and Swauth since professional edition 6.2.1. The auth_ver
option should be set to v1.0
, and the tenant
and region
are no longer needed.
It's required to create separate containers for commit, fs, and block objects.
"},{"location":"setup/setup_with_swift/#use-https-connections-to-swift","title":"Use HTTPS connections to Swift","text":"Since Pro 5.0.4, you can use HTTPS connections to Swift. Add the following options to seafile.conf:
[commit_object_backend]\nname = swift\n......\nuse_https = true\n\n[fs_object_backend]\nname = swift\n......\nuse_https = true\n\n[block_backend]\nname = swift\n......\nuse_https = true\n
Because the server package is built on CentOS 6, if you're using Debian/Ubuntu, you have to copy the system CA bundle to CentOS's CA bundle path. Otherwise Seafile can't find the CA bundle and the SSL connection will fail.
sudo mkdir -p /etc/pki/tls/certs\nsudo cp /etc/ssl/certs/ca-certificates.crt /etc/pki/tls/certs/ca-bundle.crt\nsudo ln -s /etc/pki/tls/certs/ca-bundle.crt /etc/pki/tls/cert.pem\n
"},{"location":"setup/setup_with_swift/#run-and-test","title":"Run and Test","text":"Now you can start Seafile by ./seafile.sh start
and ./seahub.sh start
and visit the website.
Since Seafile 12.0, all reverse proxy and HTTPS handling for Docker-based single-node deployments is done by Caddy. If you need to use another reverse proxy service, you can refer to this document to modify the relevant configuration files.
"},{"location":"setup/use_other_reverse_proxy/#services-that-require-reverse-proxy","title":"Services that require reverse proxy","text":"Before making changes to the configuration files, you have to know the services used by Seafile and related components (Table 1 hereafter).
Tip
The services shown in the table below are all based on the single-node integrated deployment in accordance with the Seafile official documentation.
If these services are deployed in standalone mode (such as SeaDoc and notification-server), or deployed by following the official documentation of third-party plugins (such as OnlyOffice and Collabora), you can skip modifying the configuration files of these services, because Caddy is not used as a reverse proxy in such deployments.
If you have not integrated some services, please choose standalone deployment or refer to the official documentation of the third-party plugins to install them when you need these services.
| YML file | Service | Suggested exposed port | Service listen port | Requires WebSocket |
| --- | --- | --- | --- | --- |
| seafile-server.yml | seafile | 80 | 80 | No |
| seadoc.yml | seadoc | 8888 | 80 | Yes |
| notification-server.yml | notification-server | 8083 | 8083 | Yes |
| collabora.yml | collabora | 6232 | 9980 | No |
| onlyoffice.yml | onlyoffice | 6233 | 80 | No |
"},{"location":"setup/use_other_reverse_proxy/#modify-yml-files","title":"Modify YML files","text":"Refer to Table 1 for the related service exposed ports. Add a ports
section for the corresponding services
services:\n <the service need to be modified>:\n ...\n ports:\n - \"<Suggest exposed port>:<Service listen port>\"\n
Delete all fields related to Caddy reverse proxy (in label
section)
Tip
Some .yml
files (e.g., onlyoffice.yml
) also have Caddy-related port-exposing information at the top of the file, which also needs to be removed.
We take seafile-server.yml
for example (Pro edition):
services:\n # ... other services\n\n seafile:\n image: ${SEAFILE_IMAGE:-seafileltd/seafile-pro-mc:12.0-latest}\n container_name: seafile\n ports:\n - \"80:80\"\n volumes:\n - ${SEAFILE_VOLUME:-/opt/seafile-data}:/shared\n environment:\n - DB_HOST=${SEAFILE_MYSQL_DB_HOST:-db}\n - DB_PORT=${SEAFILE_MYSQL_DB_PORT:-3306}\n - DB_ROOT_PASSWD=${INIT_SEAFILE_MYSQL_ROOT_PASSWORD:-}\n - DB_PASSWORD=${SEAFILE_MYSQL_DB_PASSWORD:?Variable is not set or empty}\n - SEAFILE_MYSQL_DB_CCNET_DB_NAME=${SEAFILE_MYSQL_DB_CCNET_DB_NAME:-ccnet_db}\n - SEAFILE_MYSQL_DB_SEAFILE_DB_NAME=${SEAFILE_MYSQL_DB_SEAFILE_DB_NAME:-seafile_db}\n - SEAFILE_MYSQL_DB_SEAHUB_DB_NAME=${SEAFILE_MYSQL_DB_SEAHUB_DB_NAME:-seahub_db}\n - TIME_ZONE=${TIME_ZONE:-Etc/UTC}\n - INIT_SEAFILE_ADMIN_EMAIL=${INIT_SEAFILE_ADMIN_EMAIL:-me@example.com}\n - INIT_SEAFILE_ADMIN_PASSWORD=${INIT_SEAFILE_ADMIN_PASSWORD:-asecret}\n - SEAFILE_SERVER_HOSTNAME=${SEAFILE_SERVER_HOSTNAME:?Variable is not set or empty}\n - SEAFILE_SERVER_PROTOCOL=${SEAFILE_SERVER_PROTOCOL:-http}\n - SITE_ROOT=${SITE_ROOT:-/}\n - NON_ROOT=${NON_ROOT:-false}\n - JWT_PRIVATE_KEY=${JWT_PRIVATE_KEY:?Variable is not set or empty}\n - ENABLE_SEADOC=${ENABLE_SEADOC:-false}\n - SEADOC_SERVER_URL=${SEADOC_SERVER_URL:-http://example.example.com/sdoc-server}\n - INIT_S3_STORAGE_BACKEND_CONFIG=${INIT_S3_STORAGE_BACKEND_CONFIG:-false}\n - INIT_S3_COMMIT_BUCKET=${INIT_S3_COMMIT_BUCKET:-}\n - INIT_S3_FS_BUCKET=${INIT_S3_FS_BUCKET:-}\n - INIT_S3_BLOCK_BUCKET=${INIT_S3_BLOCK_BUCKET:-}\n - INIT_S3_KEY_ID=${INIT_S3_KEY_ID:-}\n - INIT_S3_SECRET_KEY=${INIT_S3_SECRET_KEY:-}\n - INIT_S3_USE_V4_SIGNATURE=${INIT_S3_USE_V4_SIGNATURE:-true}\n - INIT_S3_AWS_REGION=${INIT_S3_AWS_REGION:-us-east-1}\n - INIT_S3_HOST=${INIT_S3_HOST:-us-east-1}\n - INIT_S3_USE_HTTPS=${INIT_S3_USE_HTTPS:-true}\n # please remove the label section\n depends_on:\n - db\n - memcached\n - elasticsearch\n networks:\n - seafile-net\n\n# ... other options\n
"},{"location":"setup/use_other_reverse_proxy/#add-reverse-proxy-for-related-services","title":"Add reverse proxy for related services","text":"Modify nginx.conf
and add reverse proxy for services seafile and seadoc:
Note
If your proxy server's host is not the same as the host Seafile is deployed on, please replace 127.0.0.1
with your Seafile server's host
location / {\n proxy_pass http://127.0.0.1:80;\n proxy_read_timeout 310s;\n proxy_set_header Host $host;\n proxy_set_header Forwarded \"for=$remote_addr;proto=$scheme\";\n proxy_set_header X-Real-IP $remote_addr;\n proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;\n proxy_set_header X-Forwarded-Proto $scheme;\n proxy_set_header Connection \"\";\n proxy_http_version 1.1;\n\n client_max_body_size 0;\n}\n
location /sdoc-server/ {\n proxy_pass http://127.0.0.1:8888/;\n proxy_redirect off;\n proxy_set_header Host $host;\n proxy_set_header X-Real-IP $remote_addr;\n proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;\n proxy_set_header X-Forwarded-Host $server_name;\n proxy_set_header X-Forwarded-Proto $scheme;\n\n client_max_body_size 100m;\n}\n\nlocation /socket.io {\n proxy_pass http://127.0.0.1:8888/;\n proxy_http_version 1.1;\n proxy_set_header Upgrade $http_upgrade;\n proxy_set_header Connection 'upgrade';\n proxy_redirect off;\n\n proxy_buffers 8 32k;\n proxy_buffer_size 64k;\n\n proxy_set_header X-Real-IP $remote_addr;\n proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;\n proxy_set_header Host $http_host;\n proxy_set_header X-NginX-Proxy true;\n}\n
"},{"location":"setup/use_other_reverse_proxy/#modify-env","title":"Modify .env","text":"Remove caddy.yml
from field COMPOSE_FILE
in .env
, e.g.
COMPOSE_FILE='seafile-server.yml' # remove caddy.yml\n
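The removal can also be scripted. This sketch operates on a scratch file so it is safe to run anywhere; adjust the path (and the file list, which may contain more entries) for your real .env:

```shell
# Create a scratch copy of a typical .env line
printf "COMPOSE_FILE='seafile-server.yml,caddy.yml'\n" > /tmp/env.demo

# Drop caddy.yml from the comma-separated COMPOSE_FILE list (GNU sed)
sed -i "s/,caddy\.yml//" /tmp/env.demo

cat /tmp/env.demo
```

The result is `COMPOSE_FILE='seafile-server.yml'`, matching the example above.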
"},{"location":"setup/use_other_reverse_proxy/#restart-services-and-nginx","title":"Restart services and nginx","text":"docker compose down\ndocker compose up -d\nsudo systemctl restart nginx\n
"},{"location":"setup_binary/deploy_in_a_cluster/","title":"Deploy in a cluster","text":"Tip
Since Seafile Pro server 6.0.0, cluster deployment requires "sticky session" settings in the load balancer. Otherwise folder downloads on the web UI sometimes won't work properly. Read the "Load Balancer Setting" section below for details.
"},{"location":"setup_binary/deploy_in_a_cluster/#architecture","title":"Architecture","text":"The Seafile cluster solution employs a 3-tier architecture:
This architecture scales horizontally. That means, you can handle more traffic by adding more machines. The architecture is visualized in the following picture.
There are two main components on the Seafile server node: web server (Nginx/Apache) and Seafile app server. The web server passes requests from the clients to Seafile app server. The Seafile app servers work independently. They don't know about each other's state. That means each app server can fail independently without affecting other app server instances. The load balancer is responsible for detecting failure and re-routing requests.
Even though Seafile app servers work independently, they still have to share some session information. All shared session information is stored in memory cache. Thus, all Seafile app servers have to connect to the same memory cache server (cluster). Since Pro Edition 11.0, both memcached and Redis can be used as memory cache. Before 11.0, only memcached is supported. More details about memory cache configuration is available later.
The background server is the workhorse for various background tasks, including full-text indexing, office file preview, virus scanning, and LDAP syncing. It should usually run on a dedicated server for better performance. Currently only one background task server can be running in the entire cluster. If more than one background server is running, they may conflict with each other when executing some tasks. If you need HA for the background task server, you can consider using Keepalived to build a hot backup for it. More details can be found in background server setup.
All Seafile app servers access the same set of user data. The user data has two parts: One in the MySQL database and the other one in the backend storage cluster (S3, Ceph etc.). All app servers serve the data equally to the clients.
All app servers have to connect to the same database or database cluster. We recommend using MariaDB Galera Cluster if you need a database cluster.
There are a few steps to deploy a Seafile cluster:
At least 3 Linux servers with at least 4 cores and 8GB RAM each. Two servers work as frontend servers, while one server works as the background task server. Virtual machines are sufficient for most cases.
In a small cluster, you can re-use the 3 Seafile servers to run the memcached cluster and MariaDB cluster. For larger clusters, you can add 3 more dedicated servers to run the memcached cluster and MariaDB cluster. Because the load on these two clusters is not high, they can share the hardware to save cost. Documentation about how to set up a memcached cluster and MariaDB cluster can be found here.
Since version 11.0, Redis can also be used as memory cache server. But currently only single-node Redis is supported.
"},{"location":"setup_binary/deploy_in_a_cluster/#install-python-libraries","title":"Install Python libraries","text":"On each node, you need to install some Python libraries.
First make sure you have installed Python 2.7, then:
sudo easy_install pip\nsudo pip install boto\n
If you receive an error stating \"Wheel installs require setuptools >= ...\", run this between the pip and boto lines above
sudo pip install setuptools --no-use-wheel --upgrade\n
"},{"location":"setup_binary/deploy_in_a_cluster/#configure-a-single-node","title":"Configure a Single Node","text":"You should make sure the config files on every Seafile server are consistent.
"},{"location":"setup_binary/deploy_in_a_cluster/#get-the-license","title":"Get the license","text":"Put the license you get under the top level diretory. In our wiki, we use the diretory /data/haiwen/
as the top level directory.
tar xf seafile-pro-server_8.0.0_x86-64.tar.gz\n
Now you have:
haiwen\n\u251c\u2500\u2500 seafile-license.txt\n\u2514\u2500\u2500 seafile-pro-server-8.0.0/\n
"},{"location":"setup_binary/deploy_in_a_cluster/#setup-seafile","title":"Setup Seafile","text":"Please follow Download and Setup Seafile Professional Server With MySQL to setup a single Seafile server node.
Use the load balancer's address or domain name for the server address. Don't use the local IP address of each Seafile server machine. This ensures that users will always access your service via the load balancer.
After the setup process is done, you still have to do a few manual changes to the config files.
"},{"location":"setup_binary/deploy_in_a_cluster/#seafileconf","title":"seafile.conf","text":"If you use a single memcached server, you have to add the following configuration to seafile.conf
[cluster]\nenabled = true\n\n[memcached]\nmemcached_options = --SERVER=192.168.1.134 --POOL-MIN=10 --POOL-MAX=100\n
If you use a memcached cluster, the recommended way to set it up can be found here.
You'll set up two memcached servers in active/standby mode. A floating IP address will be assigned to the current active memcached node, so you have to configure that address in seafile.conf accordingly.
[cluster]\nenabled = true\n\n[memcached]\nmemcached_options = --SERVER=<floating IP address> --POOL-MIN=10 --POOL-MAX=100\n
If you are using Redis as the cache, add the following configuration:
[cluster]\nenabled = true\n\n[redis]\n# your redis server address\nredis_server = 127.0.0.1\n# your redis server port\nredis_port = 6379\n# size of connection pool to redis, default is 100\nmax_connections = 100\n
Currently only single-node Redis is supported. Redis Sentinel or Cluster is not supported yet.
(Optional) The Seafile server also opens a port for the load balancers to run health checks. Seafile by default uses port 11001. You can change this by adding the following config option to seafile.conf
[cluster]\nhealth_check_port = 12345\n
"},{"location":"setup_binary/deploy_in_a_cluster/#seahub_settingspy","title":"seahub_settings.py","text":"You must setup and use memory cache when deploying Seafile cluster. Refer to \"memory cache\" to configure memory cache in Seahub.
Also add the following options to seahub_settings.py. These settings tell Seahub to store avatars in the database and cache them in memcached, and to store the compressed CSS cache in local memory.
AVATAR_FILE_STORAGE = 'seahub.base.database_storage.DatabaseStorage'\n
"},{"location":"setup_binary/deploy_in_a_cluster/#seafeventsconf","title":"seafevents.conf","text":"Here is an example [INDEX FILES]
section:
[INDEX FILES]\nenabled = true\ninterval = 10m\nhighlight = fvh # This configuration is only available for Seafile 6.3.0 pro and above.\nindex_office_pdf = true\nes_host = background.seafile.com\nes_port = 9200\n
Tip
enabled = true
should be left unchanged. It means the file search feature is enabled.
In a cluster environment, we have to store avatars in the database instead of on a local disk.
CREATE TABLE `avatar_uploaded` (`filename` TEXT NOT NULL, `filename_md5` CHAR(32) NOT NULL PRIMARY KEY, `data` MEDIUMTEXT NOT NULL, `size` INTEGER NOT NULL, `mtime` datetime NOT NULL);\n
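For illustration, the filename_md5 primary key is a 32-character hex MD5 digest of the avatar's file name (treat the exact derivation as an assumption about Seahub's DatabaseStorage; the path below is hypothetical):

```shell
# Compute the hex MD5 key for a hypothetical avatar file name
key=$(printf '%s' "avatars/5/a/example.png" | md5sum | awk '{print $1}')
echo "$key"
```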
"},{"location":"setup_binary/deploy_in_a_cluster/#backend-storage-settings","title":"Backend Storage Settings","text":"You also need to add the settings for backend cloud storage systems to the config files.
You need to set up Nginx/Apache with HTTP on each machine running Seafile server. This makes sure only port 80 needs to be exposed to the load balancer. (HTTPS should be set up at the load balancer.)
Please check the following documents on how to set up HTTP with Nginx/Apache. (HTTPS is not needed.)
Once you have finished configuring this single node, start it to test if it runs properly:
cd /data/haiwen/seafile-server-latest\n./seafile.sh start\n./seahub.sh start\n
Success
The first time you start seahub, the script would prompt you to create an admin account for your Seafile server.
Open your browser, visit http://ip-address-of-this-node:80
and login with the admin account.
Now you have one node working fine, let's continue to configure more nodes.
"},{"location":"setup_binary/deploy_in_a_cluster/#copy-the-config-to-all-seafile-servers","title":"Copy the config to all Seafile servers","text":"Suppose your Seafile installation directory is /data/haiwen
, compress this whole directory into a tarball and copy the tarball to all other Seafile server machines. You can simply uncompress the tarball and use it.
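The pack-and-copy step can be sketched as follows. To keep the commands runnable anywhere, this works on a scratch directory; in production you would tar /data/haiwen itself and transfer the tarball with scp or rsync:

```shell
# Simulate the installation directory with one config file
mkdir -p /tmp/demo/data/haiwen/conf
echo "[cluster]" > /tmp/demo/data/haiwen/conf/seafile.conf

# Pack the whole installation directory
tar czf /tmp/haiwen.tar.gz -C /tmp/demo/data haiwen

# On another node: unpack it under the same top-level path
mkdir -p /tmp/demo/node2
tar xzf /tmp/haiwen.tar.gz -C /tmp/demo/node2

cat /tmp/demo/node2/haiwen/conf/seafile.conf
```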
On each node, run ./seafile.sh
and ./seahub.sh
to start Seafile server.
On the backend node, you need to execute the following commands to start Seafile server. CLUSTER_MODE=backend means this node is a Seafile backend server.
export CLUSTER_MODE=backend\n./seafile.sh start\n./seafile-background-tasks.sh start\n
"},{"location":"setup_binary/deploy_in_a_cluster/#start-seafile-service-on-boot","title":"Start Seafile Service on boot","text":"It would be convenient to setup Seafile service to start on system boot. Follow this documentation to set it up on all nodes.
"},{"location":"setup_binary/deploy_in_a_cluster/#firewall-settings","title":"Firewall Settings","text":"There are 2 firewall rule changes for Seafile cluster:
Now that your cluster is already running, fire up the load balancer and welcome your users. Since version 6.0.0, Seafile Pro requires \"sticky session\" settings in the load balancer. You should refer to the manual of your load balancer for how to set up sticky sessions.
"},{"location":"setup_binary/deploy_in_a_cluster/#aws-elastic-load-balancer-elb","title":"AWS Elastic Load Balancer (ELB)","text":"In the AWS ELB management console, after you've added the Seafile server instances to the instance list, you should do two more configurations.
First you should setup HTTP(S) listeners. Ports 443 and 80 of ELB should be forwarded to the ports 80 or 443 of the Seafile servers.
Then you setup health check
Refer to AWS documentation about how to setup sticky sessions.
"},{"location":"setup_binary/deploy_in_a_cluster/#haproxy","title":"HAProxy","text":"This is a sample /etc/haproxy/haproxy.cfg
:
(Assume your health check port is 11001
)
global\n log 127.0.0.1 local1 notice\n maxconn 4096\n user haproxy\n group haproxy\n\ndefaults\n log global\n mode http\n retries 3\n maxconn 2000\n timeout connect 10000\n timeout client 300000\n timeout server 300000\n\nlisten seafile 0.0.0.0:80\n mode http\n option httplog\n option dontlognull\n option forwardfor\n cookie SERVERID insert indirect nocache\n server seafileserver01 192.168.1.165:80 check port 11001 cookie seafileserver01\n server seafileserver02 192.168.1.200:80 check port 11001 cookie seafileserver02\n
"},{"location":"setup_binary/deploy_in_a_cluster/#see-how-it-runs","title":"See how it runs","text":"Now you should be able to test your cluster. Open https://seafile.example.com in your browser and enjoy. You can also synchronize files with Seafile clients.
If the above works, the next step would be Enable search and background tasks in a cluster.
"},{"location":"setup_binary/deploy_in_a_cluster/#the-final-configuration-of-the-front-end-nodes","title":"The final configuration of the front-end nodes","text":"Here is a summary of the configurations at the front-end node that relate to the cluster setup (for version 7.1+).
For seafile.conf:
[cluster]\nenabled = true\nmemcached_options = --SERVER=<IP of memcached node> --POOL-MIN=10 --POOL-MAX=100\n
The enabled
option will prevent the start of background tasks by ./seafile.sh start
in the front-end node. The tasks should be explicitly started by ./seafile-background-tasks.sh start
at the back-end node.
For seahub_settings.py:
AVATAR_FILE_STORAGE = 'seahub.base.database_storage.DatabaseStorage'\n
For seafevents.conf:
[INDEX FILES]\nenabled = true\ninterval = 10m\nhighlight = fvh # This configuration is for improving searching speed\nes_host = <IP of background node>\nes_port = 9200\n
The [INDEX FILES]
section is needed to let the front-end node know the file search feature is enabled.
In the seafile cluster, only one server should run the background tasks, including:
Let's assume you have three nodes in your cluster: A, B, and C.
If you followed the steps for setting up a cluster, nodes B and C should already be configured as frontend nodes. You can copy the configuration of node B as a base for node A, then do the following steps:
Since 9.0, the ElasticSearch program is not part of the Seafile package. You should deploy the ElasticSearch service separately. Then edit seafevents.conf
, add the following lines:
[INDEX FILES]\nenabled = true\nes_host = <ip of elastic search service>\nes_port = 9200\ninterval = 10m\nhighlight = fvh # this is for improving the search speed\n
Edit seafile.conf to enable virus scan according to virus scan document
"},{"location":"setup_binary/enable_search_and_background_tasks_in_a_cluster/#configure-other-nodes","title":"Configure Other Nodes","text":"On nodes B and C, you need to:
Edit seafevents.conf
, add the following lines:
[INDEX FILES]\nenabled = true\nes_host = <ip of elastic search service>\nes_port = 9200\n
"},{"location":"setup_binary/enable_search_and_background_tasks_in_a_cluster/#start-the-background-node","title":"Start the background node","text":"Type the following commands to start the background node (Note, one additional command seafile-background-tasks.sh
is needed)
export CLUSTER_MODE=backend\n./seafile.sh start\n./seafile-background-tasks.sh start\n
To stop the background node, type:
./seafile-background-tasks.sh stop\n./seafile.sh stop\n
You should also configure Seafile background tasks to start on system bootup. For systemd based OS, you can add /etc/systemd/system/seafile-background-tasks.service
:
[Unit]\nDescription=Seafile Background Tasks Server\nAfter=network.target seafile.service\n\n[Service]\nType=forking\nExecStart=/opt/seafile/seafile-server-latest/seafile-background-tasks.sh start\nExecStop=/opt/seafile/seafile-server-latest/seafile-background-tasks.sh stop\nUser=root\nGroup=root\n\n[Install]\nWantedBy=multi-user.target\n
Then enable this task in systemd:
systemctl enable seafile-background-tasks.service\n
"},{"location":"setup_binary/enable_search_and_background_tasks_in_a_cluster/#the-final-configuration-of-the-background-node","title":"The final configuration of the background node","text":"Here is a summary of the configurations at the background node that relate to the cluster setup.
For seafile.conf:
[cluster]\nenabled = true\n\n[memcached]\nmemcached_options = --SERVER=<you memcached server host> --POOL-MIN=10 --POOL-MAX=100\n
For seafevents.conf:
[INDEX FILES]\nenabled = true\nes_host = <ip of elastic search service>\nes_port = 9200\ninterval = 10m\nhighlight = fvh # this is for improving the search speed\n
"},{"location":"setup_binary/fail2ban/","title":"seafile-authentication-fail2ban","text":""},{"location":"setup_binary/fail2ban/#what-is-fail2ban","title":"What is fail2ban ?","text":"Fail2ban is an intrusion prevention software framework which protects computer servers from brute-force attacks. Written in the Python programming language, it is able to run on POSIX systems that have an interface to a packet-control system or firewall installed locally, for example, iptables or TCP Wrapper.
(Definition from wikipedia - https://en.wikipedia.org/wiki/Fail2ban)
"},{"location":"setup_binary/fail2ban/#why-do-i-need-to-install-this-fail2bans-filter","title":"Why do I need to install this fail2ban's filter ?","text":"To protect your seafile website against brute force attemps. Each time a user/computer tries to connect and fails 3 times, a new line will be write in your seafile logs (seahub.log
).
Fail2ban will check this log file and will ban all failed authentications with a new rule in your firewall.
"},{"location":"setup_binary/fail2ban/#installation","title":"Installation","text":""},{"location":"setup_binary/fail2ban/#change-to-right-time-zone-in-seahub_settingspy","title":"Change to right Time Zone in seahub_settings.py","text":"Without this your Fail2Ban filter will not work
You need to add the following settings to seahub_settings.py but change it to your own time zone.
# TimeZone\n TIME_ZONE = 'Europe/Stockholm'\n
"},{"location":"setup_binary/fail2ban/#copy-and-edit-jaillocal-file","title":"Copy and edit jail.local file","text":"this file may override some parameters from your jail.conf
file
Edit jail.local
with : * ports used by your seafile website (e.g. http,https
) ; * logpath (e.g. /home/yourusername/logs/seahub.log
) ; * maxretry (default to 3 is equivalent to 9 real attemps in seafile, because one line is written every 3 failed authentications into seafile logs).
jail.local
in /etc/fail2ban
with the following content:","text":"# All standard jails are in the file configuration located\n# /etc/fail2ban/jail.conf\n\n# Warning you may override any other parameter (e.g. banaction,\n# action, port, logpath, etc) in that section within jail.local\n\n# Change logpath with your file log used by seafile (e.g. seahub.log)\n# Also you can change the max retry var (3 attemps = 1 line written in the\n# seafile log)\n# So with this maxrety to 1, the user can try 3 times before his IP is banned\n\n[seafile]\n\nenabled = true\nport = http,https\nfilter = seafile-auth\nlogpath = /home/yourusername/logs/seahub.log\nmaxretry = 3\n
"},{"location":"setup_binary/fail2ban/#create-the-fail2ban-filter-file-seafile-authconf-in-etcfail2banfilterd-with-the-following-content","title":"Create the fail2ban filter file seafile-auth.conf
in /etc/fail2ban/filter.d
with the following content:","text":"# Fail2Ban filter for seafile\n#\n\n[INCLUDES]\n\n# Read common prefixes. If any customizations available -- read them from\n# common.local\nbefore = common.conf\n\n[Definition]\n\n_daemon = seaf-server\n\nfailregex = Login attempt limit reached.*, ip: <HOST>\n\nignoreregex = \n\n# DEV Notes:\n#\n# pattern : 2015-10-20 15:20:32,402 [WARNING] seahub.auth.views:155 login Login attempt limit reached, username: <user>, ip: 1.2.3.4, attemps: 3\n# 2015-10-20 17:04:32,235 [WARNING] seahub.auth.views:163 login Login attempt limit reached, ip: 1.2.3.4, attempts: 3\n
"},{"location":"setup_binary/fail2ban/#restart-fail2ban","title":"Restart fail2ban","text":"Finally, just restart fail2ban and check your firewall (iptables for me) :
sudo fail2ban-client reload\nsudo iptables -S\n
Fail2ban will create a new chain for this jail. So you should see these new lines :
...\n-N fail2ban-seafile\n...\n-A fail2ban-seafile -j RETURN\n
"},{"location":"setup_binary/fail2ban/#tests","title":"Tests","text":"To do a simple test (but you have to be an administrator on your seafile server) go to your seafile webserver URL and try 3 authentications with a wrong password.
Actually, when you have done that, you are banned from http and https ports in iptables, thanks to fail2ban.
To check that :
on fail2ban
denis@myserver:~$ sudo fail2ban-client status seafile\nStatus for the jail: seafile\n|- filter\n| |- File list: /home/<youruser>/logs/seahub.log\n| |- Currently failed: 0\n| `- Total failed: 1\n`- action\n |- Currently banned: 1\n | `- IP list: 1.2.3.4\n `- Total banned: 1\n
on iptables :
sudo iptables -S\n\n...\n-A fail2ban-seafile -s 1.2.3.4/32 -j REJECT --reject-with icmp-port-unreachable\n...\n
To unban your IP address, just execute this command :
sudo fail2ban-client set seafile unbanip 1.2.3.4\n
Tip
As three (3) failed attempts to login will result in one line added in seahub.log a Fail2Ban jail with the settings maxretry = 3 is the same as nine (9) failed attempts to login.
"},{"location":"setup_binary/https_with_apache/","title":"Enabling HTTPS with Apache","text":"After completing the installation of Seafile Server Community Edition and Seafile Server Professional Edition, communication between the Seafile server and clients runs over (unencrypted) HTTP. While HTTP is ok for testing purposes, switching to HTTPS is imperative for production use.
HTTPS requires a SSL certificate from a Certificate Authority (CA). Unless you already have a SSL certificate, we recommend that you get your SSL certificate from Let\u2019s Encrypt using Certbot. If you have a SSL certificate from another CA, skip the section \"Getting a Let's Encrypt certificate\".
A second requirement is a reverse proxy supporting SSL. Apache, a popular web server and reverse proxy, is a good option. The full documentation of Apache is available at https://httpd.apache.org/docs/.
The recommended reverse proxy is Nginx. You find instructions for enabling HTTPS with Nginx here.
"},{"location":"setup_binary/https_with_apache/#setup","title":"Setup","text":"The setup of Seafile using Apache as a reverse proxy with HTTPS is demonstrated using the sample host name seafile.example.com
.
This manual assumes the following requirements:
If your setup differs from thes requirements, adjust the following instructions accordingly.
The setup proceeds in two steps: First, Apache is installed. Second, a SSL certificate is integrated in the Apache configuration.
"},{"location":"setup_binary/https_with_apache/#installing-apache","title":"Installing Apache","text":"Install and enable apache modules:
# Ubuntu\n$ sudo a2enmod rewrite\n$ sudo a2enmod proxy_http\n
Important: Due to the security advisory published by Django team, we recommend to disable GZip compression to mitigate BREACH attack. No version earlier than Apache 2.4 should be used.
"},{"location":"setup_binary/https_with_apache/#configuring-apache","title":"Configuring Apache","text":"Modify Apache config file. For CentOS, this is vhost.conf.
For Debian/Ubuntu, this is sites-enabled/000-default
.
<VirtualHost *:80>\n ServerName seafile.example.com\n # Use \"DocumentRoot /var/www/html\" for CentOS\n # Use \"DocumentRoot /var/www\" for Debian/Ubuntu\n DocumentRoot /var/www\n Alias /media /opt/seafile/seafile-server-latest/seahub/media\n\n AllowEncodedSlashes On\n\n RewriteEngine On\n\n <Location /media>\n Require all granted\n </Location>\n\n #\n # seafile fileserver\n #\n ProxyPass /seafhttp http://127.0.0.1:8082\n ProxyPassReverse /seafhttp http://127.0.0.1:8082\n RewriteRule ^/seafhttp - [QSA,L]\n\n #\n # seahub\n #\n SetEnvIf Authorization \"(.*)\" HTTP_AUTHORIZATION=$1\n ProxyPreserveHost On\n ProxyPass / http://127.0.0.1:8000/\n ProxyPassReverse / http://127.0.0.1:8000/\n</VirtualHost>\n
"},{"location":"setup_binary/https_with_apache/#getting-a-lets-encrypt-certificate","title":"Getting a Let's Encrypt certificate","text":"Getting a Let's Encrypt certificate is straightforward thanks to Certbot. Certbot is a free, open source software tool for requesting, receiving, and renewing Let's Encrypt certificates.
First, go to the Certbot website and choose your web server and OS.
Second, follow the detailed instructions then shown.
We recommend that you get just a certificate and that you modify the Apache configuration yourself:
sudo certbot --apache certonly\n
Follow the instructions on the screen.
Upon successful verification, Certbot saves the certificate files in a directory named after the host name in /etc/letsencrypt/live
. For the host name seafile.example.com, the files are stored in /etc/letsencrypt/live/seafile.example.com
.
To use HTTPS, you need to enable mod_ssl:
$ sudo a2enmod ssl\n
Then modify your Apache configuration file. Here is a sample:
<VirtualHost *:443>\n ServerName seafile.example.com\n DocumentRoot /var/www\n\n SSLEngine On\n SSLCertificateFile /etc/letsencrypt/live/seafile.example.com/fullchain.pem; # Path to your fullchain.pem\n SSLCertificateKeyFile /etc/letsencrypt/live/seafile.example.com/privkey.pem; # Path to your privkey.pem\n\n Alias /media /opt/seafile/seafile-server-latest/seahub/media\n\n <Location /media>\n Require all granted\n </Location>\n\n RewriteEngine On\n\n #\n # seafile fileserver\n #\n ProxyPass /seafhttp http://127.0.0.1:8082\n ProxyPassReverse /seafhttp http://127.0.0.1:8082\n RewriteRule ^/seafhttp - [QSA,L]\n\n #\n # seahub\n #\n SetEnvIf Authorization \"(.*)\" HTTP_AUTHORIZATION=$1\n ProxyPreserveHost On\n ProxyPass / http://127.0.0.1:8000/\n ProxyPassReverse / http://127.0.0.1:8000/\n</VirtualHost>\n
Finally, make sure the virtual host file does not contain syntax errors and restart Apache for the configuration changes to take effect:
sudo service apache2 restart\n
"},{"location":"setup_binary/https_with_apache/#modifying-seahub_settingspy","title":"Modifying seahub_settings.py","text":"The SERVICE_URL
in seahub_settings.py informs Seafile about the chosen domain, protocol and port. Change the SERVICE_URL
so as to account for the switch from HTTP to HTTPS and to correspond to your host name (the http://
must not be removed):
SERVICE_URL = 'https://seafile.example.com'\n
The FILE_SERVER_ROOT
in seahub_settings.py informs Seafile about the location of and the protocol used by the file server. Change the FILE_SERVER_ROOT
so as to account for the switch from HTTP to HTTPS and to correspond to your host name (the trailing /seafhttp
must not be removed):
FILE_SERVER_ROOT = 'https://seafile.example.com/seafhttp'\n
Note: The SERVICE_URL
and FILE_SERVER_ROOT
can also be modified in Seahub via System Admininstration > Settings. If they are configured via System Admin and in seahub_settings.py, the value in System Admin will take precedence.
To improve security, the file server should only be accessible via Apache.
Add the following line in the [fileserver] block on seafile.conf
in /opt/seafile/conf
:
host = 127.0.0.1 ## default port 0.0.0.0\n
After his change, the file server only accepts requests from Apache.
"},{"location":"setup_binary/https_with_apache/#starting-seafile-and-seahub","title":"Starting Seafile and Seahub","text":"Restart the seaf-server and Seahub for the config changes to take effect:
$ su seafile\n$ cd /opt/seafile/seafile-server-latest\n$ ./seafile.sh restart\n$ ./seahub.sh restart\n
"},{"location":"setup_binary/https_with_apache/#troubleshooting","title":"Troubleshooting","text":"If there are problems with paths or files containing spaces, make sure to have at least Apache 2.4.12.
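The minimum-version requirement can be checked with a small script. This is a sketch: the `apache2 -v` invocation assumes Debian/Ubuntu packaging (on other systems the binary may be called `httpd`), and `sort -V` does the dotted-version comparison:

```shell
# version_ge returns success (0) when $1 >= $2 in dotted-version order.
version_ge() {
    [ "$(printf '%s\n%s\n' "$2" "$1" | sort -V | head -n1)" = "$2" ]
}

# Extract the installed Apache version (assumes Debian-style apache2 binary).
installed=$(apache2 -v 2>/dev/null | sed -n 's#^Server version: Apache/\([0-9.]*\).*#\1#p')
if version_ge "${installed:-0}" "2.4.12"; then
    echo "Apache $installed is recent enough"
else
    echo "Apache is missing or older than 2.4.12"
fi
```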
References
After completing the installation of Seafile Server Community Edition and Seafile Server Professional Edition, communication between the Seafile server and clients runs over (unencrypted) HTTP. While HTTP is ok for testing purposes, switching to HTTPS is imperative for production use.
HTTPS requires an SSL certificate from a Certificate Authority (CA). Unless you already have an SSL certificate, we recommend that you get your SSL certificate from Let\u2019s Encrypt using Certbot. If you have an SSL certificate from another CA, skip the section \"Getting a Let's Encrypt certificate\".
A second requirement is a reverse proxy supporting SSL. Nginx, a popular and resource-friendly web server and reverse proxy, is a good option. Nginx's documentation is available at http://nginx.org/en/docs/.
If you prefer Apache, you find instructions for enabling HTTPS with Apache here.
"},{"location":"setup_binary/https_with_nginx/#setup","title":"Setup","text":"The setup of Seafile using Nginx as a reverse proxy with HTTPS is demonstrated using the sample host name seafile.example.com
.
This manual assumes the following requirements:
If your setup differs from these requirements, adjust the following instructions accordingly.
The setup proceeds in two steps: First, Nginx is installed. Second, an SSL certificate is integrated into the Nginx configuration.
"},{"location":"setup_binary/https_with_nginx/#installing-nginx","title":"Installing Nginx","text":"Install Nginx using the package repositories:
CentOSDebian$ sudo yum install nginx -y\n
$ sudo apt install nginx -y\n
After the installation, start the server and enable it so that Nginx starts at system boot:
$ sudo systemctl start nginx\n$ sudo systemctl enable nginx\n
"},{"location":"setup_binary/https_with_nginx/#preparing-nginx","title":"Preparing Nginx","text":"The configuration of a proxy server in Nginx differs slightly between CentOS and Debian/Ubuntu. Additionally, SELinux's restrictive default configuration on CentOS requires a modification.
"},{"location":"setup_binary/https_with_nginx/#preparing-nginx-on-centos","title":"Preparing Nginx on CentOS","text":"Switch SELinux into permissive mode and perpetuate the setting:
$ sudo setenforce permissive\n$ sed -i 's/^SELINUX=.*/SELINUX=permissive/' /etc/selinux/config\n
Create a configuration file for seafile in /etc/nginx/conf.d
:
$ touch /etc/nginx/conf.d/seafile.conf\n
"},{"location":"setup_binary/https_with_nginx/#preparing-nginx-on-debianubuntu","title":"Preparing Nginx on Debian/Ubuntu","text":"Create a configuration file for seafile in /etc/nginx/sites-available/
:
$ touch /etc/nginx/sites-available/seafile.conf\n
Delete the default files in /etc/nginx/sites-enabled/
and /etc/nginx/sites-available
:
$ rm /etc/nginx/sites-enabled/default\n$ rm /etc/nginx/sites-available/default\n
Create a symbolic link:
$ ln -s /etc/nginx/sites-available/seafile.conf /etc/nginx/sites-enabled/seafile.conf\n
"},{"location":"setup_binary/https_with_nginx/#configuring-nginx","title":"Configuring Nginx","text":"Copy the following sample Nginx config file into the just created seafile.conf
and modify the content to fit your needs:
log_format seafileformat '$http_x_forwarded_for $remote_addr [$time_local] \"$request\" $status $body_bytes_sent \"$http_referer\" \"$http_user_agent\" $upstream_response_time';\n\nserver {\n listen 80;\n server_name seafile.example.com;\n\n proxy_set_header X-Forwarded-For $remote_addr;\n\n location / {\n proxy_pass http://127.0.0.1:8000;\n proxy_set_header Host $http_host;\n proxy_set_header X-Real-IP $remote_addr;\n proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;\n proxy_set_header X-Forwarded-Host $server_name;\n proxy_read_timeout 1200s;\n\n # used for view/edit office file via Office Online Server\n client_max_body_size 0;\n\n access_log /var/log/nginx/seahub.access.log seafileformat;\n error_log /var/log/nginx/seahub.error.log;\n }\n\n location /seafhttp {\n rewrite ^/seafhttp(.*)$ $1 break;\n proxy_pass http://127.0.0.1:8082;\n client_max_body_size 0;\n proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;\n\n proxy_connect_timeout 36000s;\n proxy_read_timeout 36000s;\n proxy_send_timeout 36000s;\n\n send_timeout 36000s;\n\n access_log /var/log/nginx/seafhttp.access.log seafileformat;\n error_log /var/log/nginx/seafhttp.error.log;\n }\n location /media {\n root /opt/seafile/seafile-server-latest/seahub;\n }\n}\n
The following option must be modified in the config file:

- server_name - the host name of your Seafile server (seafile.example.com in the sample file)

Optional customizable options in the seafile.conf are:

- Port (listen) - if the Seafile server should be available on a non-standard port
- location / (proxy_pass) - if Seahub is configured to start on a different port than 8000
- location /seafhttp (proxy_pass) - if seaf-server is configured to start on a different port than 8082
- Maximum upload file size (client_max_body_size)

The default value for client_max_body_size is 1M. Uploading larger files will result in an error message with HTTP error code 413 (\"Request Entity Too Large\"). It is recommended to synchronize the value of client_max_body_size with the parameter max_upload_size
in section [fileserver]
of seafile.conf. Optionally, the value can also be set to 0 to disable this limit. Client uploads are only partly affected by this limit: the desktop sync client transfers files in small blocks, so even with a limit of 100 MiB it can safely upload files of any size.
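For example, to allow uploads of up to 8 GB (an illustrative value, not a recommendation), the two limits can be kept in step. In /opt/seafile/conf/seafile.conf, max_upload_size is given in MB:

```ini
# /opt/seafile/conf/seafile.conf -- illustrative 8 GB upload limit
[fileserver]
max_upload_size = 8000
```

The matching Nginx directive would then be client_max_body_size 8000m; in both the / and /seafhttp location blocks.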
Finally, make sure your seafile.conf does not contain syntax errors and restart Nginx for the configuration changes to take effect:
$ nginx -t\n$ nginx -s reload\n
"},{"location":"setup_binary/https_with_nginx/#getting-a-lets-encrypt-certificate","title":"Getting a Let's Encrypt certificate","text":"Getting a Let's Encrypt certificate is straightforward thanks to Certbot. Certbot is a free, open source software tool for requesting, receiving, and renewing Let's Encrypt certificates.
First, go to the Certbot website and choose your webserver and OS.
Second, follow the detailed instructions then shown.
We recommend that you get just a certificate and that you modify the Nginx configuration yourself:
$ sudo certbot certonly --nginx\n
Follow the instructions on the screen.
Upon successful verification, Certbot saves the certificate files in a directory named after the host name in /etc/letsencrypt/live
. For the host name seafile.example.com, the files are stored in /etc/letsencrypt/live/seafile.example.com
.
Add a server block for port 443 and an HTTP-to-HTTPS redirect to the seafile.conf
configuration file in /etc/nginx
.
This is a (shortened) sample configuration for the host name seafile.example.com:
log_format seafileformat '$http_x_forwarded_for $remote_addr [$time_local] \"$request\" $status $body_bytes_sent \"$http_referer\" \"$http_user_agent\" $upstream_response_time';\n\nserver {\n listen 80;\n server_name seafile.example.com;\n rewrite ^ https://$http_host$request_uri? permanent; # Forced redirect from HTTP to HTTPS\n\n server_tokens off; # Prevents the Nginx version from being displayed in the HTTP response header\n}\n\nserver {\n listen 443 ssl;\n ssl_certificate /etc/letsencrypt/live/seafile.example.com/fullchain.pem; # Path to your fullchain.pem\n ssl_certificate_key /etc/letsencrypt/live/seafile.example.com/privkey.pem; # Path to your privkey.pem\n server_name seafile.example.com;\n server_tokens off;\n\n location / {\n proxy_pass http://127.0.0.1:8000;\n proxy_set_header Host $http_host;\n proxy_set_header X-Real-IP $remote_addr;\n proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;\n proxy_set_header X-Forwarded-Host $server_name;\n proxy_read_timeout 1200s;\n\n proxy_set_header X-Forwarded-Proto https;\n\n... # No changes beyond this point compared to the Nginx configuration without HTTPS\n
Finally, make sure your seafile.conf does not contain syntax errors and restart Nginx for the configuration changes to take effect:
nginx -t\nnginx -s reload\n
"},{"location":"setup_binary/https_with_nginx/#large-file-uploads","title":"Large file uploads","text":"Tip for uploading very large files (> 4GB): By default, Nginx buffers large request bodies in a temporary file. Only after the body has been completely received does Nginx send it to the upstream server (seaf-server in our case). When the file size is very large, however, this buffering mechanism doesn't work well and Nginx may stop proxying the body midway. So if you want to support uploads of files larger than 4GB, we suggest you install Nginx version >= 1.8.0 and add the following options to the Nginx config file:
location /seafhttp {\n ... ...\n proxy_request_buffering off;\n }\n
If you have WebDAV enabled, it is recommended to add the same:
location /seafdav {\n ... ...\n proxy_request_buffering off;\n }\n
"},{"location":"setup_binary/https_with_nginx/#modifying-seahub_settingspy","title":"Modifying seahub_settings.py","text":"The SERVICE_URL
in seahub_settings.py informs Seafile about the chosen domain, protocol and port. Change the SERVICE_URL
so as to account for the switch from HTTP to HTTPS and to correspond to your host name (the protocol prefix https://
must not be omitted):
SERVICE_URL = 'https://seafile.example.com'\n
The FILE_SERVER_ROOT
in seahub_settings.py informs Seafile about the location of and the protocol used by the file server. Change the FILE_SERVER_ROOT
so as to account for the switch from HTTP to HTTPS and to correspond to your host name (the trailing /seafhttp
must not be removed):
FILE_SERVER_ROOT = 'https://seafile.example.com/seafhttp'\n
Note: The SERVICE_URL
and FILE_SERVER_ROOT
can also be modified in Seahub via System Administration > Settings. If they are configured both via System Administration and in seahub_settings.py, the value set in System Administration takes precedence.
To improve security, the file server should only be accessible via Nginx.
Add the following line in the [fileserver]
block of seafile.conf
in /opt/seafile/conf
:
host = 127.0.0.1 ## the default bind address is 0.0.0.0\n
After this change, the file server only accepts requests from Nginx.
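You can verify that the binding took effect. This sketch assumes the ss utility from iproute2 is available and parses its listener table; the helper function makes the check easy to reuse:

```shell
# bound_to_loopback succeeds when the given listener table shows the
# port bound to 127.0.0.1 (rather than 0.0.0.0).
bound_to_loopback() {
    printf '%s\n' "$1" | grep -Eq "127\.0\.0\.1:$2([^0-9]|\$)"
}

listeners=$(ss -tln 2>/dev/null || true)
if bound_to_loopback "$listeners" 8082; then
    echo "file server is bound to 127.0.0.1:8082"
else
    echo "port 8082 not bound to loopback (is seaf-server running?)"
fi
```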
"},{"location":"setup_binary/https_with_nginx/#starting-seafile-and-seahub","title":"Starting Seafile and Seahub","text":"Restart the seaf-server and Seahub for the config changes to take effect:
$ su seafile\n$ cd /opt/seafile/seafile-server-latest\n$ ./seafile.sh restart\n$ ./seahub.sh restart # or \"./seahub.sh start-fastcgi\" if you're using fastcgi\n
"},{"location":"setup_binary/https_with_nginx/#additional-modern-settings-for-nginx-optional","title":"Additional modern settings for Nginx (optional)","text":""},{"location":"setup_binary/https_with_nginx/#activating-ipv6","title":"Activating IPv6","text":"IPv6 must be enabled on the server, otherwise Nginx will not start. An AAAA DNS record is also required for IPv6 usage.
listen 443;\nlisten [::]:443;\n
"},{"location":"setup_binary/https_with_nginx/#activating-http2","title":"Activating HTTP2","text":"Activate HTTP/2 for better performance. It is only available with SSL and Nginx version >= 1.9.5. Simply add http2
.
listen 443 http2;\nlisten [::]:443 http2;\n
"},{"location":"setup_binary/https_with_nginx/#advanced-tls-configuration-for-nginx-optional","title":"Advanced TLS configuration for Nginx (optional)","text":"The TLS configuration in the sample Nginx configuration file above receives a B overall rating on SSL Labs. By modifying the TLS configuration in seafile.conf
, this rating can be significantly improved.
The following sample Nginx configuration file for the host name seafile.example.com contains additional security-related directives. (Note that this sample file uses a generic path for the SSL certificate files.) Some of the directives require further steps as explained below.
server {\n listen 80;\n server_name seafile.example.com;\n rewrite ^ https://$http_host$request_uri? permanent; # Forced redirect from HTTP to HTTPS\n server_tokens off;\n }\n server {\n listen 443 ssl;\n ssl_certificate /etc/ssl/cacert.pem; # Path to your cacert.pem\n ssl_certificate_key /etc/ssl/privkey.pem; # Path to your privkey.pem\n server_name seafile.example.com;\n server_tokens off;\n\n # HSTS for protection against man-in-the-middle-attacks\n add_header Strict-Transport-Security \"max-age=31536000; includeSubDomains\";\n\n # DH parameters for Diffie-Hellman key exchange\n ssl_dhparam /etc/nginx/dhparam.pem;\n\n # Supported protocols and ciphers for general purpose server with good security and compatibility with most clients\n ssl_protocols TLSv1.2 TLSv1.3;\n ssl_ciphers ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384;\n ssl_prefer_server_ciphers off;\n\n # Supported protocols and ciphers for server when clients > 5 years old (i.e., Windows Explorer) must be supported\n #ssl_protocols TLSv1 TLSv1.1 TLSv1.2 TLSv1.3;\n #ssl_ciphers ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384:DHE-RSA-CHACHA20-POLY1305:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES128-SHA:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES256-SHA:ECDHE-RSA-AES256-SHA:DHE-RSA-AES128-SHA256:DHE-RSA-AES256-SHA256:AES128-GCM-SHA256:AES256-GCM-SHA384:AES128-SHA256:AES256-SHA256:AES128-SHA:AES256-SHA:DES-CBC3-SHA;\n #ssl_prefer_server_ciphers on;\n\n ssl_session_timeout 5m;\n ssl_session_cache shared:SSL:5m;\n\n location / {\n proxy_pass http://127.0.0.1:8000;\n proxy_set_header Host $http_host;\n proxy_set_header X-Real-IP $remote_addr;\n proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;\n proxy_set_header X-Forwarded-Host $server_name;\n proxy_set_header X-Forwarded-Proto https;\n\n access_log /var/log/nginx/seahub.access.log;\n error_log /var/log/nginx/seahub.error.log;\n\n proxy_read_timeout 1200s;\n\n client_max_body_size 0;\n }\n\n location /seafhttp {\n rewrite ^/seafhttp(.*)$ $1 break;\n proxy_pass http://127.0.0.1:8082;\n client_max_body_size 0;\n proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;\n proxy_connect_timeout 36000s;\n proxy_read_timeout 36000s;\n proxy_send_timeout 36000s;\n send_timeout 36000s;\n }\n\n location /media {\n root /opt/seafile/seafile-server-latest/seahub;\n }\n }\n
"},{"location":"setup_binary/https_with_nginx/#enabling-http-strict-transport-security","title":"Enabling HTTP Strict Transport Security","text":"Enable HTTP Strict Transport Security (HSTS) to prevent man-in-the-middle-attacks by adding this directive:
add_header Strict-Transport-Security \"max-age=31536000; includeSubDomains\" always;\n
HSTS instructs web browsers to automatically use HTTPS. That means that after the first visit to the HTTPS version of Seahub, the browser will only use HTTPS to access the site.
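Whether the header actually reaches clients can be checked from the command line. This sketch assumes curl is installed and uses the placeholder host name from the examples above:

```shell
# check_hsts succeeds when a header dump contains an HSTS header.
check_hsts() {
    printf '%s\n' "$1" | grep -qi '^strict-transport-security:'
}

# Fetch only the response headers; seafile.example.com is a placeholder.
headers=$(curl -sI https://seafile.example.com 2>/dev/null || true)
if check_hsts "$headers"; then
    echo "HSTS header present"
else
    echo "no Strict-Transport-Security header found"
fi
```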
"},{"location":"setup_binary/https_with_nginx/#using-perfect-forward-secrecy","title":"Using Perfect Forward Secrecy","text":"Enable Diffie-Hellman (DH) key-exchange. Generate DH parameters and write them in a .pem file using the following command:
$ openssl dhparam 2048 > /etc/nginx/dhparam.pem # Generates DH parameter of length 2048 bits\n
The generation of the DH parameters may take some time depending on the server's processing power.
Add the following directive in the HTTPS server block:
ssl_dhparam /etc/nginx/dhparam.pem;\n
"},{"location":"setup_binary/https_with_nginx/#restricting-tls-protocols-and-ciphers","title":"Restricting TLS protocols and ciphers","text":"Disallow the use of old TLS protocols and ciphers. Mozilla provides a configuration generator for optimizing the conflicting objectives of security and compatibility. Visit https://wiki.mozilla.org/Security/Server_Side_TLS#Nginx for more information.
"},{"location":"setup_binary/installation_ce/","title":"Installation of Seafile Server Community Edition with MySQL/MariaDB","text":"This manual explains how to deploy and run Seafile Server Community Edition (Seafile CE) on a Linux server from a pre-built package using MySQL/MariaDB as database. The deployment has been tested for Debian/Ubuntu.
"},{"location":"setup_binary/installation_ce/#requirements","title":"Requirements","text":"Seafile CE for x86 architecture requires a minimum of 2 cores and 2GB RAM.
"},{"location":"setup_binary/installation_ce/#setup","title":"Setup","text":""},{"location":"setup_binary/installation_ce/#installing-and-preparing-the-sql-database","title":"Installing and preparing the SQL database","text":"Seafile supports MySQL and MariaDB. We recommend that you use the preferred SQL database management engine included in the package repositories of your distribution.
You can find step-by-step how-tos for installing MySQL and MariaDB in the tutorials on the Digital Ocean website.
Seafile uses the mysql_native_password plugin for authentication. The versions of MySQL and MariaDB installed on CentOS 8, Debian 10, and Ubuntu 20.04 use a different authentication plugin by default. It is therefore required to change the authentication plugin to mysql_native_password for the root user prior to the installation of Seafile. The above-mentioned tutorials explain how to do this.
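As a sketch, on MySQL the plugin can be switched for the root user with a statement like the following ('your-root-password' is a placeholder; on MariaDB the syntax differs slightly, so consult the tutorials for your version):

```sql
-- Switch the root account to mysql_native_password (MySQL syntax)
ALTER USER 'root'@'localhost' IDENTIFIED WITH mysql_native_password BY 'your-root-password';
FLUSH PRIVILEGES;
```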
"},{"location":"setup_binary/installation_ce/#installing-prerequisites","title":"Installing prerequisites","text":"Seafile 10.0.xSeafile 11.0.x Ubuntu 22.04/Ubuntu 20.04/Debian 11/Debian 10sudo apt-get update\nsudo apt-get install -y python3 python3-setuptools python3-pip libmysqlclient-dev\nsudo apt-get install -y memcached libmemcached-dev\n\nsudo pip3 install --timeout=3600 django==3.2.* future==0.18.* mysqlclient==2.1.* \\\n pymysql pillow==10.2.* pylibmc captcha==0.5.* markupsafe==2.0.1 jinja2 sqlalchemy==1.4.44 \\\n psd-tools django-pylibmc django_simple_captcha==0.5.20 djangosaml2==1.5.* pysaml2==7.2.* pycryptodome==3.16.* cffi==1.15.1 lxml\n
Debian 11/Ubuntu 22.04Debian 12Ubuntu 24.04 with virtual env # Ubuntu 22.04 (almost the same for Ubuntu 20.04 and Debian 11, Debian 10)\nsudo apt-get update\nsudo apt-get install -y python3 python3-dev python3-setuptools python3-pip libmysqlclient-dev ldap-utils libldap2-dev\nsudo apt-get install -y memcached libmemcached-dev\n\nsudo pip3 install --timeout=3600 django==4.2.* future==0.18.* mysqlclient==2.1.* \\\n pymysql pillow==10.2.* pylibmc captcha==0.5.* markupsafe==2.0.1 jinja2 sqlalchemy==2.0.18 \\\n psd-tools django-pylibmc django_simple_captcha==0.6.* djangosaml2==1.5.* pysaml2==7.2.* pycryptodome==3.16.* cffi==1.15.1 lxml python-ldap==3.4.3\n
Note
Debian 12 and Ubuntu 24.04 now discourage system-wide installation of python modules with pip. It is preferred to install modules into a virtual environment, which keeps them separate from the files installed by the system package manager and enables different versions to be installed for different applications. For these python virtual environments (venv for short) to work, you have to activate the venv to make the packages installed in it available to the programs you run. That is done here with source python-venv/bin/activate
.
# Debian 12\nsudo apt-get update\nsudo apt-get install -y python3 python3-dev python3-setuptools python3-pip libmariadb-dev-compat ldap-utils libldap2-dev libsasl2-dev python3.11-venv\nsudo apt-get install -y memcached libmemcached-dev\n\nmkdir /opt/seafile\ncd /opt/seafile\n\n# create the virtual environment in the python-venv directory\npython3 -m venv python-venv\n\n# activate the venv\nsource python-venv/bin/activate\n# Notice that this will usually change your prompt so you know the venv is active\n\n# install packages into the active venv with pip (sudo isn't needed because this is installing in the venv, not system-wide).\npip3 install --timeout=3600 django==4.2.* future==0.18.* mysqlclient==2.1.* pymysql pillow==10.0.* pylibmc captcha==0.4 markupsafe==2.0.1 jinja2 sqlalchemy==2.0.18 psd-tools django-pylibmc django_simple_captcha==0.5.* djangosaml2==1.5.* pysaml2==7.2.* pycryptodome==3.16.* cffi==1.15.1 lxml python-ldap==3.4.3\n
Note
Debian 12 and Ubuntu 24.04 now discourage system-wide installation of python modules with pip. It is preferred to install modules into a virtual environment, which keeps them separate from the files installed by the system package manager and enables different versions to be installed for different applications. For these python virtual environments (venv for short) to work, you have to activate the venv to make the packages installed in it available to the programs you run. That is done here with source python-venv/bin/activate
.
# Ubuntu 24.04\nsudo apt-get update\nsudo apt-get install -y python3 python3-dev python3-setuptools python3-pip libmysqlclient-dev ldap-utils libldap2-dev python3.12-venv\nsudo apt-get install -y memcached libmemcached-dev\n\nmkdir /opt/seafile\ncd /opt/seafile\n\n# create the virtual environment in the python-venv directory\npython3 -m venv python-venv\n\n# activate the venv\nsource python-venv/bin/activate\n# Notice that this will usually change your prompt so you know the venv is active\n\n# install packages into the active venv with pip (sudo isn't needed because this is installing in the venv, not system-wide).\npip3 install --timeout=3600 django==4.2.* future==0.18.* mysqlclient==2.1.* \\\n pymysql pillow==10.2.* pylibmc captcha==0.5.* markupsafe==2.0.1 jinja2 sqlalchemy==2.0.18 \\\n psd-tools django-pylibmc django_simple_captcha==0.6.* djangosaml2==1.5.* pysaml2==7.2.* pycryptodome==3.16.* cffi==1.16.0 lxml python-ldap==3.4.3\n
"},{"location":"setup_binary/installation_ce/#creating-the-program-directory","title":"Creating the program directory","text":"The standard directory for Seafile's program files is /opt/seafile
. Create this directory and change into it:
sudo mkdir /opt/seafile\ncd /opt/seafile\n
Tip
The program directory can be changed. The standard directory /opt/seafile
is assumed for the rest of this manual. If you decide to put Seafile in another directory, modify the commands accordingly.
It is good practice not to run applications as root.
Create a new user and follow the instructions on the screen:
sudo adduser seafile\n
Change ownership of the created directory to the new user:
sudo chown -R seafile: /opt/seafile\n
All the following steps are done as user seafile.
Change to user seafile:
su seafile\n
"},{"location":"setup_binary/installation_ce/#downloading-the-install-package","title":"Downloading the install package","text":"Download the install package from the download page on Seafile's website using wget.
We use Seafile CE version 8.0.4 as an example in the rest of this manual.
"},{"location":"setup_binary/installation_ce/#uncompressing-the-package","title":"Uncompressing the package","text":"The install package is downloaded as a compressed tarball which needs to be uncompressed.
Uncompress the package using tar:
tar xf seafile-server_8.0.4_x86-64.tar.gz\n
Now you have:
$ tree -L 2\n.\n\u251c\u2500\u2500 seafile-server-8.0.4\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 check_init_admin.py\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 reset-admin.sh\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 runtime\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 seaf-fsck.sh\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 seaf-fuse.sh\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 seaf-gc.sh\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 seafile\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 seafile.sh\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 seahub\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 seahub.sh\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 setup-seafile-mysql.py\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 setup-seafile-mysql.sh\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 setup-seafile.sh\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 sql\n\u2502\u00a0\u00a0 \u2514\u2500\u2500 upgrade\n\u2514\u2500\u2500 seafile-server_8.0.4_x86-64.tar.gz\n
"},{"location":"setup_binary/installation_ce/#setting-up-seafile-ce","title":"Setting up Seafile CE","text":"The install package comes with a script that sets Seafile up for you. Specifically, the script creates the required directories and extracts all files in the right place. It can also create a MySQL user and the three databases that Seafile's components require:
While ccnet server was merged into the seafile-server in Seafile 8.0, the corresponding database is still required for the time being
Run the script as user seafile:
# For installations using python virtual environment, activate it if it isn't already active\nsource python-venv/bin/activate\n\ncd seafile-server-8.0.4\n./setup-seafile-mysql.sh\n
Configure your Seafile Server by specifying the following three parameters:
Option Description Note server name Name of the Seafile Server 3-15 characters, only English letters, digits and underscore ('_') are allowed server's ip or domain IP address or domain name used by the Seafile Server Seafile client programs will access the server using this address fileserver port TCP port used by the Seafile fileserver Default port is 8082; it is recommended to use this port and to change it only if it is used by another service. In the next step, choose whether to create new databases for Seafile or to use existing databases. The creation of new databases requires the root password for the SQL server.
When choosing \"[1] Create new ccnet/seafile/seahub databases\", the script creates these databases and a MySQL user that Seafile Server will use to access them. To this effect, you need to answer these questions:
Question Description Note mysql server host Host address of the MySQL server Default is localhost mysql server port TCP port used by the MySQL server Default port is 3306; almost every MySQL server uses this port mysql root password Password of the MySQL root account The root password is required to create new databases and a MySQL user mysql user for Seafile MySQL user created by the script, used by Seafile's components to access the databases Default is seafile; the user is created unless it exists mysql password for Seafile user Password for the user above, written in Seafile's config files Percent sign ('%') is not allowed ccnet database name Name of the database used by ccnet Default is \"ccnet_db\", the database is created if it does not exist seafile database name Name of the database used by Seafile Default is \"seafile_db\", the database is created if it does not exist seahub database name Name of the database used by seahub Default is \"seahub_db\", the database is created if it does not exist. When choosing \"[2] Use existing ccnet/seafile/seahub databases\", these are the prompts you need to answer:
Question Description Note mysql server host Host address of the MySQL server Default is localhost mysql server port TCP port used by the MySQL server Default port is 3306; almost every MySQL server uses this port mysql user for Seafile User used by Seafile's components to access the databases The user must exist mysql password for Seafile user Password for the user above ccnet database name Name of the database used by ccnet, default is \"ccnet_db\" The database must exist seafile database name Name of the database used by Seafile, default is \"seafile_db\" The database must exist seahub database name Name of the database used by Seahub, default is \"seahub_db\" The database must exist. If the setup is successful, you see the following output:
The directory layout then looks as follows:
$ tree /opt/seafile -L 2\nseafile\n\u251c\u2500\u2500 ccnet\n\u251c\u2500\u2500 conf\n\u2502 \u2514\u2500\u2500 gunicorn.conf.py\n\u2502 \u2514\u2500\u2500 seafdav.conf\n\u2502 \u2514\u2500\u2500 seafile.conf\n\u2502 \u2514\u2500\u2500 seahub_settings.py\n\u251c\u2500\u2500 seafile-data\n\u2502 \u2514\u2500\u2500 library-template\n\u251c\u2500\u2500 seafile-server-8.0.4\n\u2502 \u2514\u2500\u2500 check_init_admin.py\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 reset-admin.sh\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 runtime\n\u2502 \u2514\u2500\u2500 seaf-fsck.sh\n\u2502 \u2514\u2500\u2500 seaf-gc.sh\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 seafile\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 seafile.sh\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 seahub\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 seahub.sh\n\u2502 \u2514\u2500\u2500 setup-seafile-mysql.py\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 setup-seafile-mysql.sh\n\u2502 \u2514\u2500\u2500 sql\n\u2502\u00a0\u00a0 \u2514\u2500\u2500 upgrade\n\u251c\u2500\u2500 seafile-server-latest -> seafile-server-8.0.4\n\u251c\u2500\u2500 seahub-data\n\u2502\u00a0\u00a0 \u2514\u2500\u2500 avatars\n
The folder seafile-server-latest
is a symbolic link to the current Seafile Server folder. When later you upgrade to a new version, the upgrade scripts update this link to point to the latest Seafile Server folder.
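The mechanism can be demonstrated in a scratch directory (the paths below are throwaway stand-ins for /opt/seafile, and the version numbers are illustrative):

```shell
demo=$(mktemp -d)
mkdir "$demo/seafile-server-8.0.4"
# initial link, as created by the setup script
ln -s "$demo/seafile-server-8.0.4" "$demo/seafile-server-latest"

# an upgrade script repoints the link to the new version directory;
# -n replaces the existing symlink instead of descending into it
mkdir "$demo/seafile-server-8.0.5"
ln -sfn "$demo/seafile-server-8.0.5" "$demo/seafile-server-latest"

readlink "$demo/seafile-server-latest"   # now points at the 8.0.5 directory
rm -rf "$demo"
```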
Note
If you don't have the root password, you need someone who has the privileges, e.g., the database admin, to create the three databases required by Seafile, as well as a MySQL user who can access the databases. For example, to create three databases ccnet_db
/ seafile_db
/ seahub_db
for ccnet/seafile/seahub respectively, and a MySQL user \"seafile\" to access these databases run the following SQL queries:
create database `ccnet_db` character set = 'utf8';\ncreate database `seafile_db` character set = 'utf8';\ncreate database `seahub_db` character set = 'utf8';\n\ncreate user 'seafile'@'localhost' identified by 'seafile';\n\nGRANT ALL PRIVILEGES ON `ccnet_db`.* to `seafile`@localhost;\nGRANT ALL PRIVILEGES ON `seafile_db`.* to `seafile`@localhost;\nGRANT ALL PRIVILEGES ON `seahub_db`.* to `seafile`@localhost;\n
"},{"location":"setup_binary/installation_ce/#setup-memory-cache","title":"Setup Memory Cache","text":"Seahub caches items (avatars, profiles, etc.) on the file system by default (/tmp/seahub_cache/). You can replace this with Memcached or Redis.
MemcachedRedisUse the following commands to install memcached and the corresponding libraries on your system:
# on Debian/Ubuntu 18.04+\napt-get install memcached libmemcached-dev -y\npip3 install --timeout=3600 pylibmc django-pylibmc\n\nsystemctl enable --now memcached\n
Add the following configuration to seahub_settings.py
.
CACHES = {\n 'default': {\n 'BACKEND': 'django_pylibmc.memcached.PyLibMCCache',\n 'LOCATION': '127.0.0.1:11211',\n },\n}\n
Redis is supported since version 11.0
Install Redis with package installers in your OS.
refer to Django's documentation about using Redis cache to add Redis configurations to seahub_settings.py
.
Seafile's config files as created by the setup script are prepared for Seafile running behind a reverse proxy.
To access Seafile's web interface and to create working sharing links without a reverse proxy, you need to modify two configuration files in /opt/seafile/conf
:
seahub_settings.py
(if you use 9.0.x): Add port 8000 to the SERVICE_URL
(i.e., SERVICE_URL = 'http://1.2.3.4:8000/').ccnet.conf
(if you use 8.0.x or 7.1.x): Add port 8000 to the SERVICE_URL
(i.e., SERVICE_URL = http://1.2.3.4:8000/).gunicorn.conf.py
: Change the bind to \"0.0.0.0:8000\" (i.e., bind = \"0.0.0.0:8000\")Run the following commands in /opt/seafile/seafile-server-latest
:
# For installations using python virtual environment, activate it if it isn't already active\nsource python-venv/bin/activate\n\n./seafile.sh start # starts seaf-server\n./seahub.sh start # starts seahub\n
Success
The first time you start Seahub, the script prompts you to create an admin account for your Seafile Server. Enter the email address of the admin user followed by the password.
Now you can access Seafile via the web interface at the host address and port 8000 (e.g., http://1.2.3.4:8000)
Warning
On CentOS, the firewall blocks traffic on port 8000 by default.
"},{"location":"setup_binary/installation_ce/#stopping-and-restarting-seafile-and-seahub","title":"Stopping and Restarting Seafile and Seahub","text":""},{"location":"setup_binary/installation_ce/#stopping","title":"Stopping","text":"./seahub.sh stop # stops seahub\n./seafile.sh stop # stops seaf-server\n
"},{"location":"setup_binary/installation_ce/#restarting","title":"Restarting","text":"# For installations using python virtual environment, activate it if it isn't already active\nsource python-venv/bin/activate\n\n./seafile.sh restart\n./seahub.sh restart\n
"},{"location":"setup_binary/installation_ce/#enabling-https","title":"Enabling HTTPS","text":"It is strongly recommended to switch from unencrypted HTTP (via port 8000) to encrypted HTTPS (via port 443).
This manual provides instructions for enabling HTTPS for the two most popular web servers and reverse proxies:
This manual explains how to deploy and run Seafile Server Professional Edition (Seafile PE) on a Linux server from a pre-built package using MySQL/MariaDB as database. The deployment has been tested for Debian/Ubuntu.
"},{"location":"setup_binary/installation_pro/#requirements","title":"Requirements","text":"Seafile PE requires a minimum of 2 cores and 2GB RAM. If elasticsearch is installed on the same server, the minimum requirements are 4 cores and 4 GB RAM.
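The sizing rule above can be stated explicitly (a hypothetical helper for capacity planning, not part of Seafile):

```python
def minimum_requirements(elasticsearch_on_same_host: bool) -> dict:
    """Documented minimum sizing for a Seafile PE host.

    With Elasticsearch co-located, the minimum doubles from
    2 cores / 2 GB RAM to 4 cores / 4 GB RAM.
    """
    if elasticsearch_on_same_host:
        return {"cpu_cores": 4, "ram_gb": 4}
    return {"cpu_cores": 2, "ram_gb": 2}
```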
Seafile PE can be used without a paid license with up to three users. Licenses for more users can be purchased in the Seafile Customer Center, from Seafile Sales at sales@seafile.com, or from one of our partners.
"},{"location":"setup_binary/installation_pro/#setup","title":"Setup","text":""},{"location":"setup_binary/installation_pro/#installing-and-preparing-the-sql-database","title":"Installing and preparing the SQL database","text":"These instructions assume that MySQL/MariaDB server and client are installed and a MySQL/MariaDB root user can authenticate using the mysql_native_password plugin.
"},{"location":"setup_binary/installation_pro/#installing-prerequisites","title":"Installing prerequisites","text":"Seafile 9.0.xSeafile 10.0.xSeafile 11.0.x Ubuntu 20.04/Debian 10/Ubuntu 18.04Centos 8apt-get update\napt-get install -y python3 python3-setuptools python3-pip python3-ldap libmysqlclient-dev\napt-get install -y memcached libmemcached-dev\napt-get install -y poppler-utils\n\npip3 install --timeout=3600 django==3.2.* future mysqlclient pymysql Pillow pylibmc \\ \ncaptcha jinja2 sqlalchemy==1.4.3 psd-tools django-pylibmc django-simple-captcha pycryptodome==3.12.0 cffi==1.14.0 lxml\n
sudo yum install python3 python3-setuptools python3-pip python3-devel mysql-devel gcc -y\nsudo yum install poppler-utils -y\n\nsudo pip3 install --timeout=3600 django==3.2.* Pillow==9.4.0 pylibmc captcha jinja2 sqlalchemy==1.4.3 \\\n django-pylibmc django-simple-captcha python3-ldap mysqlclient pycryptodome==3.12.0 cffi==1.14.0 lxml\n
Ubuntu 22.04/Ubuntu 20.04/Debian 11/Debian 10 apt-get update\napt-get install -y python3 python3-setuptools python3-pip python3-ldap libmysqlclient-dev\napt-get install -y memcached libmemcached-dev\napt-get install -y poppler-utils\n\nsudo pip3 install --timeout=3600 django==3.2.* future==0.18.* mysqlclient==2.1.* \\\n pymysql pillow==10.2.* pylibmc captcha==0.5.* markupsafe==2.0.1 jinja2 sqlalchemy==1.4.44 \\\n psd-tools django-pylibmc django_simple_captcha==0.5.20 djangosaml2==1.5.* pysaml2==7.2.* pycryptodome==3.16.* cffi==1.15.1 lxml\n
Ubuntu 22.04/Ubuntu 20.04/Debian 11/Debian 10Debian 12Ubuntu 24.04 with virtual env # on (on , it is almost the same)\napt-get update\napt-get install -y python3 python3-dev python3-setuptools python3-pip python3-ldap libmysqlclient-dev ldap-utils libldap2-dev dnsutils\napt-get install -y memcached libmemcached-dev\napt-get install -y poppler-utils\n\nsudo pip3 install --timeout=3600 django==4.2.* future==0.18.* mysqlclient==2.1.* \\\n pymysql pillow==10.2.* pylibmc captcha==0.5.* markupsafe==2.0.1 jinja2 sqlalchemy==2.0.18 \\\n psd-tools django-pylibmc django_simple_captcha==0.6.* djangosaml2==1.5.* pysaml2==7.2.* pycryptodome==3.16.* cffi==1.15.1 python-ldap==3.4.3 lxml\n
Note
Debian 12 and Ubuntu 24.04 are now discouraging system-wide installation of python modules with pip. It is preferred now to install modules into a virtual environment which keeps them separate from the files installed by the system package manager, and enables different versions to be installed for different applications. With these python virtual environments (venv for short) to work, you have to activate the venv to make the packages installed in it available to the programs you run. That is done here with source python-venv/bin/activate
.
sudo apt-get update\nsudo apt-get install -y python3 python3-dev python3-setuptools python3-pip libmariadb-dev-compat ldap-utils libldap2-dev libsasl2-dev python3.11-venv\nsudo apt-get install -y memcached libmemcached-dev\n\nmkdir /opt/seafile\ncd /opt/seafile\n\n# create the virtual environment in the python-venv directory\npython3 -m venv python-venv\n\n# activate the venv\nsource python-venv/bin/activate\n# Notice that this will usually change your prompt so you know the venv is active\n\n# install packages into the active venv with pip (sudo isn't needed because this is installing in the venv, not system-wide).\npip3 install --timeout=3600 django==4.2.* future==0.18.* mysqlclient==2.1.* pymysql pillow==10.0.* pylibmc captcha==0.4 markupsafe==2.0.1 jinja2 sqlalchemy==2.0.18 psd-tools django-pylibmc django_simple_captcha==0.5.* djangosaml2==1.5.* pysaml2==7.2.* pycryptodome==3.16.* cffi==1.15.1 lxml python-ldap==3.4.3\n
Note
Debian 12 and Ubuntu 24.04 are now discouraging system-wide installation of python modules with pip. It is preferred now to install modules into a virtual environment which keeps them separate from the files installed by the system package manager, and enables different versions to be installed for different applications. With these python virtual environments (venv for short) to work, you have to activate the venv to make the packages installed in it available to the programs you run. That is done here with source python-venv/bin/activate
.
# Ubuntu 24.04\nsudo apt-get update\nsudo apt-get install -y python3 python3-dev python3-setuptools python3-pip libmysqlclient-dev ldap-utils libldap2-dev python3.12-venv\nsudo apt-get install -y memcached libmemcached-dev\n\nmkdir /opt/seafile\ncd /opt/seafile\n\n# create the virtual environment in the python-venv directory\npython3 -m venv python-venv\n\n# activate the venv\nsource python-venv/bin/activate\n# Notice that this will usually change your prompt so you know the venv is active\n\n# install packages into the active venv with pip (sudo isn't needed because this is installing in the venv, not system-wide).\npip3 install --timeout=3600 django==4.2.* future==0.18.* mysqlclient==2.1.* \\\n pymysql pillow==10.2.* pylibmc captcha==0.5.* markupsafe==2.0.1 jinja2 sqlalchemy==2.0.18 \\\n psd-tools django-pylibmc django_simple_captcha==0.6.* djangosaml2==1.5.* pysaml2==7.2.* pycryptodome==3.16.* cffi==1.16.0 lxml python-ldap==3.4.3\n
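Before installing packages or starting Seafile, it is worth confirming that the venv really is active; Python itself can tell you (a small diagnostic sketch, not part of Seafile):

```python
import sys

def in_virtualenv() -> bool:
    # Inside an active venv, sys.prefix points at the venv directory,
    # while sys.base_prefix still points at the system installation.
    # Outside a venv, the two are equal.
    return sys.prefix != sys.base_prefix

if __name__ == "__main__":
    print("venv active:", in_virtualenv())
```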
"},{"location":"setup_binary/installation_pro/#installing-java-runtime-environment","title":"Installing Java Runtime Environment","text":"Java Runtime Environment (JRE) is no longer needed in Seafile version 12.0.
"},{"location":"setup_binary/installation_pro/#creating-the-programm-directory","title":"Creating the program directory","text":"The standard directory for Seafile's program files is /opt/seafile
. Create this directory and change into it:
mkdir /opt/seafile\ncd /opt/seafile\n
The program directory can be changed. The standard directory /opt/seafile
is assumed for the rest of this manual. If you decide to put Seafile in another directory, some commands need to be modified accordingly.
Elasticsearch, the indexing server, cannot be run as root. More generally, it is good practice not to run applications as root.
Create a new user and follow the instructions on the screen:
adduser seafile\n
Change ownership of the created directory to the new user:
chown -R seafile: /opt/seafile\n
All the following steps are done as user seafile.
Change to user seafile:
su seafile\n
"},{"location":"setup_binary/installation_pro/#placing-the-seafile-pe-license","title":"Placing the Seafile PE license","text":"Save the license file in Seafile's program directory /opt/seafile
. Make sure that the name is seafile-license.txt
. (If the file has a different name or cannot be read, Seafile PE will not start.)
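Since Seafile PE refuses to start when the license file is missing, misnamed, or unreadable, a quick pre-flight check can save a confusing startup failure (a hypothetical helper; /opt/seafile is this manual's default program directory):

```python
import os

def license_ok(seafile_dir: str = "/opt/seafile") -> bool:
    """True if seafile-license.txt exists and is readable under seafile_dir."""
    path = os.path.join(seafile_dir, "seafile-license.txt")
    return os.path.isfile(path) and os.access(path, os.R_OK)
```

Run it as the seafile user, since that is the user the server reads the file as.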
The install packages for Seafile PE are available for download in the Seafile Customer Center. To access the Customer Center, a user account is necessary. The registration is free.
Beginning with Seafile PE 7.0.17, the Seafile Customer Center provides two install packages for every version (using Seafile PE 8.0.4 as an example):
The former is suitable for installation on Ubuntu/Debian servers, the latter for CentOS servers.
Download the install package using wget (replace the x.x.x with the version you wish to download):
# Debian/Ubuntu\nwget -O 'seafile-pro-server_x.x.x_x86-64_Ubuntu.tar.gz' 'VERSION_SPECIFIC_LINK_FROM_SEAFILE_CUSTOMER_CENTER'\n
We use Seafile version 8.0.4 as an example in the remainder of these instructions.
"},{"location":"setup_binary/installation_pro/#uncompressing-the-package","title":"Uncompressing the package","text":"The install package is downloaded as a compressed tarball which needs to be uncompressed.
Uncompress the package using tar:
# Debian/Ubuntu\ntar xf seafile-pro-server_8.0.4_x86-64_Ubuntu.tar.gz\n
Now you have:
$ tree -L 2 /opt/seafile\n.\n\u251c\u2500\u2500 seafile-license.txt\n\u2514\u2500\u2500 seafile-pro-server-8.0.4\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 check-db-type.py\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 check_init_admin.py\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 create-db\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 index_op.py\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 migrate.py\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 migrate-repo.py\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 migrate-repo.sh\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 migrate.sh\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 pro\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 remove-objs.py\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 remove-objs.sh\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 reset-admin.sh\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 run_index_master.sh\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 run_index_worker.sh\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 runtime\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 seaf-backup-cmd.py\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 seaf-backup-cmd.sh\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 seaf-encrypt.sh\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 seaf-fsck.sh\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 seaf-fuse.sh\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 seaf-gc.sh\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 seaf-gen-key.sh\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 seafile\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 seafile-background-tasks.sh\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 seafile.sh\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 seaf-import.sh\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 seahub\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 seahub-extra\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 seahub.sh\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 setup-seafile-mysql.py\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 setup-seafile-mysql.sh\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 setup-seafile.sh\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 sql\n\u2502\u00a0\u00a0 \u2514\u2500\u2500 upgrade\n\u2514\u2500\u2500 seafile-pro-server_8.0.4_x86-64.tar.gz\n
Tip
The names of the install packages differ for Seafile CE and Seafile PE. Using Seafile CE and Seafile PE 8.0.4 as an example, the names are as follows:
seafile-server_8.0.4_x86-64.tar.gz
; uncompressing into folder seafile-server-8.0.4
seafile-pro-server_8.0.4_x86-64.tar.gz
; uncompressing into folder seafile-pro-server-8.0.4
The setup process of Seafile PE is the same as that of Seafile CE. See Installation of Seafile Server Community Edition with MySQL/MariaDB.
After the successful completion of the setup script, the directory layout of Seafile PE looks as follows (some folders only get created after the first start, e.g. logs
):
For Seafile 7.1.x and later
$ tree -L 2 /opt/seafile\n.\n\u251c\u2500\u2500 seafile-license.txt # license file\n\u251c\u2500\u2500 ccnet \n\u251c\u2500\u2500 conf # configuration files\n\u2502 \u2514\u2500\u2500 ccnet.conf\n\u2502 \u2514\u2500\u2500 gunicorn.conf.py\n\u2502 \u2514\u2500\u2500 __pycache__\n\u2502 \u2514\u2500\u2500 seafdav.conf\n\u2502 \u2514\u2500\u2500 seafevents.conf\n\u2502 \u2514\u2500\u2500 seafile.conf\n\u2502 \u2514\u2500\u2500 seahub_settings.py\n\u251c\u2500\u2500 logs # log files\n\u251c\u2500\u2500 pids # process id files\n\u251c\u2500\u2500 pro-data # data specific for Seafile PE\n\u251c\u2500\u2500 seafile-data # object database\n\u251c\u2500\u2500 seafile-pro-server-8.0.4\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 check-db-type.py\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 check_init_admin.py\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 create-db\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 index_op.py\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 migrate.py\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 migrate-repo.py\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 migrate-repo.sh\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 migrate.sh\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 pro\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 reset-admin.sh\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 run_index_master.sh\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 run_index_worker.sh\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 runtime\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 seaf-backup-cmd.py\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 seaf-backup-cmd.sh\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 seaf-encrypt.sh\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 seaf-fsck.sh\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 seaf-fuse.sh\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 seaf-gc.sh\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 seaf-gen-key.sh\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 seafile\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 seafile-background-tasks.sh\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 seafile.sh\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 seaf-import.sh\n\u2502\u00a0\u00a0 
\u251c\u2500\u2500 seahub\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 seahub-extra\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 seahub.sh\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 setup-seafile-mysql.py\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 setup-seafile-mysql.sh\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 setup-seafile.sh\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 sql\n\u2502\u00a0\u00a0 \u2514\u2500\u2500 upgrade\n\u251c\u2500\u2500 seafile-server-latest -> seafile-pro-server-8.0.4\n\u251c\u2500\u2500 seahub-data\n \u2514\u2500\u2500 avatars # user avatars\n
"},{"location":"setup_binary/installation_pro/#setup-memory-cache","title":"Setup Memory Cache","text":"Memory cache is mandatory for the Pro Edition. You may use Memcached or Redis as the cache server.
MemcachedRedisUse the following commands to install memcached and the corresponding libraries on your system:
# on Debian/Ubuntu 18.04+\napt-get install memcached libmemcached-dev -y\npip3 install --timeout=3600 pylibmc django-pylibmc\n\nsystemctl enable --now memcached\n
Add the following configuration to seahub_settings.py
.
CACHES = {\n 'default': {\n 'BACKEND': 'django_pylibmc.memcached.PyLibMCCache',\n 'LOCATION': '127.0.0.1:11211',\n },\n}\n
Redis is supported since version 11.0
Install Redis with package installers in your OS.
Refer to Django's documentation on using a Redis cache to add the Redis configuration to seahub_settings.py
.
You need to set up at least HTTP to make Seafile's web interface work. This manual provides instructions for enabling HTTP/HTTPS for the two most popular web servers and reverse proxies:
Run the following commands in /opt/seafile/seafile-server-latest
:
# For installations using python virtual environment, activate it if it isn't already active\nsource python-venv/bin/activate\n\n./seafile.sh start # Start Seafile service\n./seahub.sh start # Start seahub website, port defaults to 127.0.0.1:8000\n
Success
The first time you start Seahub, the script prompts you to create an admin account for your Seafile Server. Enter the email address of the admin user followed by the password.
Now you can access Seafile via the web interface at the host address (e.g., http://1.2.3.4:80).
"},{"location":"setup_binary/installation_pro/#enabling-full-text-search","title":"Enabling full text search","text":"Seafile uses the indexing server ElasticSearch to enable full text search.
"},{"location":"setup_binary/installation_pro/#deploying-elasticsearch","title":"Deploying ElasticSearch","text":"Our recommendation for deploying ElasticSearch is using Docker. Detailed information about installing Docker on various Linux distributions is available at Docker Docs.
Seafile PE 9.0 only supports ElasticSearch 7.x. Seafile PE 10.0, 11.0, and 12.0 only support ElasticSearch 8.x.
We use ElasticSearch version 7.16.2 as an example in this section. Version 7.16.2 and newer versions have been successfully tested with Seafile.
Pull the Docker image:
sudo docker pull elasticsearch:7.16.2\n
Create a folder for persistent data created by ElasticSearch and change its permission:
sudo mkdir -p /opt/seafile-elasticsearch/data && chmod -R 777 /opt/seafile-elasticsearch/data/\n
Now start the ElasticSearch container using the docker run command:
sudo docker run -d \\\n--name es \\\n-p 9200:9200 \\\n-e \"discovery.type=single-node\" -e \"bootstrap.memory_lock=true\" \\\n-e \"ES_JAVA_OPTS=-Xms2g -Xmx2g\" -e \"xpack.security.enabled=false\" \\\n--restart=always \\\n-v /opt/seafile-elasticsearch/data:/usr/share/elasticsearch/data \\\nelasticsearch:7.16.2\n
Security notice
We sincerely thank Mohammed Adel of Safe Decision Co. for suggesting this notice.
By default, Elasticsearch will only listen on 127.0.0.1
, but this restriction no longer applies once Docker publishes the service port, leaving your Elasticsearch service exposed to the external network, where attackers could access it and extract sensitive data. We recommend that you manually configure the firewall on the Docker host, for example:
sudo iptables -A INPUT -p tcp -s <your seafile server ip> --dport 9200 -j ACCEPT\nsudo iptables -A INPUT -p tcp --dport 9200 -j DROP\n
The above commands allow only the host running your Seafile service to connect to Elasticsearch; all other addresses are blocked. If you deploy Elasticsearch from binary packages, refer to the official documentation to set the address that Elasticsearch binds to.
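After locking the port down, you can verify from the Seafile host that it can still reach Elasticsearch. A minimal probe using only the standard library and Elasticsearch's _cluster/health endpoint (host and port here are assumptions matching the example above):

```python
import json
import urllib.request

def es_health_url(host: str = "127.0.0.1", port: int = 9200) -> str:
    """URL of Elasticsearch's cluster health endpoint."""
    return f"http://{host}:{port}/_cluster/health"

def es_health(host: str = "127.0.0.1", port: int = 9200, timeout: float = 5.0) -> str:
    """Return the cluster status reported by Elasticsearch:
    "green", "yellow" or "red"."""
    with urllib.request.urlopen(es_health_url(host, port), timeout=timeout) as resp:
        return json.load(resp)["status"]
```

If this times out from a host you expected to allow, re-check the iptables rules; if it succeeds from a host you expected to block, the DROP rule is not in effect.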
"},{"location":"setup_binary/installation_pro/#modifying-seafevents","title":"Modifying seafevents","text":"Add the following configuration to seafevents.conf
:
[INDEX FILES]\nes_host = your elasticsearch server's IP # IP address of ElasticSearch host\n # use 127.0.0.1 if deployed on the same server\nes_port = 9200 # port of ElasticSearch host\ninterval = 10m # frequency of index updates in minutes\nhighlight = fvh # parameter for improving the search performance\n
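Because seafevents.conf is INI-style, a typo in the [INDEX FILES] section is easy to catch with a few lines of Python before restarting (a diagnostic sketch; section and option names are the ones documented above):

```python
import configparser

def check_index_section(text: str) -> dict:
    """Parse the [INDEX FILES] section and return the key settings.

    inline_comment_prefixes lets the real file's trailing "# ..." comments
    parse cleanly; a missing section or option raises an error here instead
    of at seafevents startup.
    """
    cp = configparser.ConfigParser(inline_comment_prefixes=("#",))
    cp.read_string(text)
    sec = cp["INDEX FILES"]
    return {
        "es_host": sec.get("es_host"),
        "es_port": sec.getint("es_port"),
        "interval": sec.get("interval"),
    }

sample = """
[INDEX FILES]
es_host = 127.0.0.1
es_port = 9200
interval = 10m
highlight = fvh
"""
```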
Finally, restart Seafile:
./seafile.sh restart && ./seahub.sh restart \n
"},{"location":"setup_binary/memcached_mariadb_cluster/","title":"Setup Memcached Cluster and MariaDB Galera Cluster","text":"For high availability, it is recommended to set up a memcached cluster and MariaDB Galera cluster for Seafile cluster. This documentation will provide information on how to do this with 3 servers. You can either use 3 dedicated servers or use the 3 Seafile server nodes.
"},{"location":"setup_binary/memcached_mariadb_cluster/#setup-memcached-cluster","title":"Setup Memcached Cluster","text":"Seafile servers share session information within memcached. So when you set up a Seafile cluster, there needs to be a memcached server (cluster) running.
The simplest way is to use a single-node memcached server, but when this server fails, some functions in Seafile's web UI stop working. So for HA, it's usually desirable to have more than one memcached server.
We recommend setting up two independent memcached servers in active/standby mode. A floating IP address (also called a virtual IP address) is assigned to the current active node. When the active node goes down, Keepalived migrates the virtual IP to the standby node. So you actually use a single-node memcached, but rely on Keepalived (or an alternative) to provide high availability.
After installing memcached on each server, you need to make some modifications to the memcached config file.
# Under Ubuntu\nvi /etc/memcached.conf\n\n# Start with a cap of 64 megs of memory. It's reasonable, and the daemon default\n# Note that the daemon will grow to this size, but does not start out holding this much\n# memory\n# -m 64\n-m 256\n\n# Specify which IP address to listen on. The default is to listen on all IP addresses\n# This parameter is one of the only security measures that memcached has, so make sure\n# it's listening on a firewalled interface.\n-l 0.0.0.0\n\nservice memcached restart\n
Please configure memcached to start on system startup.
Install and configure Keepalived.
# For Ubuntu\nsudo apt-get install keepalived -y\n
Modify keepalived config file /etc/keepalived/keepalived.conf
.
On active node
cat /etc/keepalived/keepalived.conf\n\n! Configuration File for keepalived\n\nglobal_defs {\n notification_email {\n root@localhost\n }\n notification_email_from keepalived@localhost\n smtp_server 127.0.0.1\n smtp_connect_timeout 30\n router_id node1\n vrrp_mcast_group4 224.0.100.19\n}\nvrrp_script chk_memcached {\n script \"killall -0 memcached && exit 0 || exit 1\"\n interval 1\n weight -5\n}\n\nvrrp_instance VI_1 {\n state MASTER\n interface ens33\n virtual_router_id 51\n priority 100\n advert_int 1\n authentication {\n auth_type PASS\n auth_pass hello123\n }\n virtual_ipaddress {\n 192.168.1.113/24 dev ens33\n }\n track_script {\n chk_memcached\n }\n}\n
On standby node
cat /etc/keepalived/keepalived.conf\n\n! Configuration File for keepalived\n\nglobal_defs {\n notification_email {\n root@localhost\n }\n notification_email_from keepalived@localhost\n smtp_server 127.0.0.1\n smtp_connect_timeout 30\n router_id node2\n vrrp_mcast_group4 224.0.100.19\n}\nvrrp_script chk_memcached {\n script \"killall -0 memcached && exit 0 || exit 1\"\n interval 1\n weight -5\n}\n\nvrrp_instance VI_1 {\n state BACKUP\n interface ens33\n virtual_router_id 51\n priority 98\n advert_int 1\n authentication {\n auth_type PASS\n auth_pass hello123\n }\n virtual_ipaddress {\n 192.168.1.113/24 dev ens33\n }\n track_script {\n chk_memcached\n }\n}\n
Please adjust the network device names accordingly. virtual_ipaddress is the floating IP address in use.
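The failover behaviour of the two configs above follows from simple VRRP arithmetic: each node advertises a priority, and a failing track_script with a negative weight lowers it. A sketch of that logic, using the numbers from the configs (the election itself is of course done by keepalived):

```python
def effective_priority(base: int, check_passes: bool, weight: int = -5) -> int:
    """VRRP priority after applying a track_script with a negative weight.

    With a negative weight, keepalived subtracts |weight| from the node's
    priority while the check script is failing.
    """
    return base if check_passes else base + weight

# Active node advertises 100, standby 98. When memcached dies on the
# active node, its effective priority drops to 95, below the standby's
# 98, so the standby wins the election and takes over the floating IP.
assert effective_priority(100, False) < effective_priority(98, True)
```

This is also why the standby's priority (98) must stay above the active node's failed priority (95): otherwise no failover would occur.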
"},{"location":"setup_binary/memcached_mariadb_cluster/#setup-mariadb-cluster","title":"Setup MariaDB Cluster","text":"MariaDB cluster helps you to remove single point of failure from the cluster architecture. Every update in the database cluster is synchronously replicated to all instances.
You can choose between two different setups:
We refer to the documentation from MariaDB team:
Tip
Seafile doesn't use read/write isolation techniques. So you don't need to setup read and write pools.
"},{"location":"setup_binary/migrate_from_sqlite_to_mysql/","title":"Migrate From SQLite to MySQL","text":"Note
The tutorial is only related to Seafile CE edition.
First make sure the python module for MySQL is installed. On Ubuntu/Debian, use sudo apt-get install python-mysqldb
or sudo apt-get install python3-mysqldb
to install it.
Steps to migrate Seafile from SQLite to MySQL:
Stop Seafile and Seahub.
Download sqlite2mysql.sh and sqlite2mysql.py to the top directory of your Seafile installation path. For example, /opt/seafile
.
Run sqlite2mysql.sh
:
chmod +x sqlite2mysql.sh\n./sqlite2mysql.sh\n
This script will produce three files: ccnet-db.sql
, seafile-db.sql
, seahub-db.sql
.
Then create three databases (ccnet_db, seafile_db, seahub_db) and a seafile user.
mysql> create database ccnet_db character set = 'utf8';\nmysql> create database seafile_db character set = 'utf8';\nmysql> create database seahub_db character set = 'utf8';\n
Import the ccnet data into MySQL.
mysql> use ccnet_db;\nmysql> source ccnet-db.sql;\n
Import the seafile data into MySQL.
mysql> use seafile_db;\nmysql> source seafile-db.sql;\n
Import the seahub data into MySQL.
mysql> use seahub_db;\nmysql> source seahub-db.sql;\n
ccnet.conf
has been removed since Seafile 12.0
Modify the configuration files: append the following lines to ccnet.conf:
[Database]\nENGINE=mysql\nHOST=127.0.0.1\nPORT = 3306\nUSER=root\nPASSWD=root\nDB=ccnet_db\nCONNECTION_CHARSET=utf8\n
Use 127.0.0.1
, don't use localhost
Replace the database section in seafile.conf
with following lines:
[database]\ntype=mysql\nhost=127.0.0.1\nport = 3306\nuser=root\npassword=root\ndb_name=seafile_db\nconnection_charset=utf8\n
Append the following lines to seahub_settings.py
:
DATABASES = {\n 'default': {\n 'ENGINE': 'django.db.backends.mysql',\n 'USER' : 'root',\n 'PASSWORD' : 'root',\n 'NAME' : 'seahub_db',\n 'HOST' : '127.0.0.1',\n 'PORT': '3306',\n # This is only needed for MySQL older than 5.5.5.\n # For MySQL newer than 5.5.5 INNODB is the default already.\n 'OPTIONS': {\n \"init_command\": \"SET storage_engine=INNODB\",\n }\n }\n}\n
Restart seafile and seahub
Note
User notifications will be cleared during migration due to the slight differences between MySQL and SQLite. If you only see the busy icon when clicking the notifications button beside your avatar, please remove the user_notifications
table manually by:
use seahub_db;\ndelete from notifications_usernotification;\n
"},{"location":"setup_binary/migrate_from_sqlite_to_mysql/#faq","title":"FAQ","text":""},{"location":"setup_binary/migrate_from_sqlite_to_mysql/#encountered-errno-150-foreign-key-constraint-is-incorrectly-formed","title":"Encountered errno: 150 \"Foreign key constraint is incorrectly formed\"
","text":"This error typically occurs because the current table being created contains a foreign key that references a table whose primary key has not yet been created. Therefore, please check the database table creation order in the SQL file. The correct order is:
auth_user\nauth_group\nauth_permission\nauth_group_permissions\nauth_user_groups\nauth_user_user_permissions\n
and post_office_emailtemplate\npost_office_email\npost_office_attachment\npost_office_attachment_emails\n
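The required order is just a topological sort of the foreign-key graph: every table must be created after the tables it references. A small sketch of that ordering check (the dependencies shown are the Django auth tables from the list above; cycles are not handled, as FK graphs here have none):

```python
def creation_order(deps):
    """Order table names so each appears after all of its FK targets.

    deps maps a table name to the set of tables it references.
    """
    order, seen = [], set()

    def visit(table):
        if table in seen:
            return
        seen.add(table)
        for parent in sorted(deps.get(table, ())):
            visit(parent)        # create referenced tables first
        order.append(table)

    for table in sorted(deps):
        visit(table)
    return order

auth_deps = {
    "auth_user": set(),
    "auth_group": set(),
    "auth_permission": set(),
    "auth_group_permissions": {"auth_group", "auth_permission"},
    "auth_user_groups": {"auth_user", "auth_group"},
    "auth_user_user_permissions": {"auth_user", "auth_permission"},
}
```

Any ordering returned by such a sort avoids errno 150, because no CREATE TABLE runs before its referenced primary keys exist.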
"},{"location":"setup_binary/outline_ce/","title":"Deploying Seafile","text":"We provide two ways to deploy Seafile services. Docker is the recommended way.
Warning
Since version 12.0, binary based deployment for community edition is deprecated and will not be supported in a future release.
There are two ways to deploy Seafile Pro Edition. Since version 8.0, the recommended way to install Seafile Pro Edition is using Docker.
Seafile Professional Edition SOFTWARE LICENSE AGREEMENT
NOTICE: READ THE FOLLOWING TERMS AND CONDITIONS CAREFULLY BEFORE YOU DOWNLOAD, INSTALL OR USE Seafile Ltd.'S PROPRIETARY SOFTWARE. BY INSTALLING OR USING THE SOFTWARE, YOU AGREE TO BE BOUND BY THE FOLLOWING TERMS AND CONDITIONS. IF YOU DO NOT AGREE TO THE FOLLOWING TERMS AND CONDITIONS, DO NOT INSTALL OR USE THE SOFTWARE.
"},{"location":"setup_binary/seafile_professional_sdition_software_license_agreement/#1-definitions","title":"1. DEFINITIONS","text":"\"Seafile Ltd.\" means Seafile Ltd.
\"You and Your\" means the party licensing the Software hereunder.
\"Software\" means the computer programs provided under the terms of this license by Seafile Ltd. together with any documentation provided therewith.
"},{"location":"setup_binary/seafile_professional_sdition_software_license_agreement/#2-grant-of-rights","title":"2. GRANT OF RIGHTS","text":""},{"location":"setup_binary/seafile_professional_sdition_software_license_agreement/#21-general","title":"2.1 General","text":"The License granted for Software under this Agreement authorizes You on a non-exclusive basis to use the Software. The Software is licensed, not sold to You and Seafile Ltd. reserves all rights not expressly granted to You in this Agreement. The License is personal to You and may not be assigned by You to any third party.
"},{"location":"setup_binary/seafile_professional_sdition_software_license_agreement/#22-license-provisions","title":"2.2 License Provisions","text":"Subject to the receipt by Seafile Ltd. of the applicable license fees, You have the right use the Software as follows:
The inclusion of source code with the License is explicitly not for your use to customize a solution or re-use in your own projects or products. The benefit of including the source code is for purposes of security auditing. You may modify the code only for emergency bug fixes that impact security or performance and only for use within your enterprise. You may not create or distribute derivative works based on the Software or any part thereof. If you need enhancements to the software features, you should suggest them to Seafile Ltd. for version improvements.
"},{"location":"setup_binary/seafile_professional_sdition_software_license_agreement/#4-ownership","title":"4. OWNERSHIP","text":"You acknowledge that all copies of the Software in any form are the sole property of Seafile Ltd.. You have no right, title or interest to any such Software or copies thereof except as provided in this Agreement.
"},{"location":"setup_binary/seafile_professional_sdition_software_license_agreement/#5-confidentiality","title":"5. CONFIDENTIALITY","text":"You hereby acknowledge and agreed that the Software constitute and contain valuable proprietary products and trade secrets of Seafile Ltd., embodying substantial creative efforts and confidential information, ideas, and expressions. You agree to treat, and take precautions to ensure that your employees and other third parties treat, the Software as confidential in accordance with the confidentiality requirements herein.
"},{"location":"setup_binary/seafile_professional_sdition_software_license_agreement/#6-disclaimer-of-warranties","title":"6. DISCLAIMER OF WARRANTIES","text":"EXCEPT AS OTHERWISE SET FORTH IN THIS AGREEMENT THE SOFTWARE IS PROVIDED TO YOU \"AS IS\", AND Seafile Ltd. MAKES NO EXPRESS OR IMPLIED WARRANTIES WITH RESPECT TO ITS FUNCTIONALITY, CONDITION, PERFORMANCE, OPERABILITY OR USE. WITHOUT LIMITING THE FOREGOING, Seafile Ltd. DISCLAIMS ALL IMPLIED WARRANTIES INCLUDING, WITHOUT LIMITATION, ANY IMPLIED WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE OR FREEDOM FROM INFRINGEMENT. SOME JURISDICTIONS DO NOT ALLOW THE EXCLUSION OF IMPLIED WARRANTIES, SO THE ABOVE EXCLUSIONS MAY NOT APPLY TO YOU. THE LIMITED WARRANTY HEREIN GIVES YOU SPECIFIC LEGAL RIGHTS, AND YOU MAY ALSO HAVE OTHER RIGHTS THAT VARY FROM ONE JURISDICTION TO ANOTHER.
"},{"location":"setup_binary/seafile_professional_sdition_software_license_agreement/#7-limitation-of-liability","title":"7. LIMITATION OF LIABILITY","text":"YOU ACKNOWLEDGE AND AGREE THAT THE CONSIDERATION WHICH Seafile Ltd. IS CHARGING HEREUNDER DOES NOT INCLUDE ANY CONSIDERATION FOR ASSUMPTION BY Seafile Ltd. OF THE RISK OF YOUR CONSEQUENTIAL OR INCIDENTAL DAMAGES WHICH MAY ARISE IN CONNECTION WITH YOUR USE OF THE SOFTWARE. ACCORDINGLY, YOU AGREE THAT Seafile Ltd. SHALL NOT BE RESPONSIBLE TO YOU OR ANY THIRD PARTY FOR ANY LOSS-OF-PROFIT, LOST SAVINGS, INDIRECT, INCIDENTAL, SPECIAL OR CONSEQUENTIAL DAMAGES ARISING OUT OF A LICENSING OR USE OF THE SOFTWARE.
"},{"location":"setup_binary/seafile_professional_sdition_software_license_agreement/#8-indemnification","title":"8. INDEMNIFICATION","text":"You agree to defend, indemnify and hold Seafile Ltd. and its employees, agents, representatives and assigns harmless from and against any claims, proceedings, damages, injuries, liabilities, costs, attorney's fees relating to or arising out of Your use of the Software or any breach of this Agreement.
"},{"location":"setup_binary/seafile_professional_sdition_software_license_agreement/#9-termination","title":"9. TERMINATION","text":"Your license is effective until terminated. You may terminate it at any time by destroying the Software or returning all copies of the Software to Seafile Ltd.. Your license will terminate immediately without notice if You breach any of the terms and conditions of this Agreement, including non or incomplete payment of the license fee. Upon termination of this Agreement for any reason: You will uninstall all copies of the Software; You will immediately cease and desist all use of the Software; and will destroy all copies of the software in your possession.
"},{"location":"setup_binary/seafile_professional_sdition_software_license_agreement/#10-updates-and-support","title":"10. UPDATES AND SUPPORT","text":"Seafile Ltd. has the right, but no obligation, to periodically update the Software, at its complete discretion, without the consent or obligation to You or any licensee or user.
YOU HEREBY ACKNOWLEDGE THAT YOU HAVE READ THIS AGREEMENT, UNDERSTAND IT AND AGREE TO BE BOUND BY ITS TERMS AND CONDITIONS.
"},{"location":"setup_binary/setup_seafile_cluster_with_nfs/","title":"Setup Seafile cluster with NFS","text":"In a Seafile cluster, one common way to share data among the Seafile server instances is to use NFS. You should only share the files objects (located in seafile-data
folder) and user avatars as well as thumbnails (located in seahub-data
folder) on NFS. Here we'll provide a tutorial about how and what to share.
How to set up an NFS server and client is beyond the scope of this wiki. Here are a few references:
Suppose your Seafile server installation directory is /data/haiwen; after you run the setup script there should be a seafile-data and a seahub-data directory in it. Suppose also that you mount the NFS drive on /seafile-nfs. You should then follow these steps:
Move the seafile-data and seahub-data folders to /seafile-nfs: mv /data/haiwen/seafile-data /seafile-nfs/\nmv /data/haiwen/seahub-data /seafile-nfs/\n
Create symbolic links to the seafile-data and seahub-data folders: cd /data/haiwen\nln -s /seafile-nfs/seafile-data /data/haiwen/seafile-data\nln -s /seafile-nfs/seahub-data /data/haiwen/seahub-data\n
This way the instances will share the same seafile-data and seahub-data folders. All other config files and log files will remain independent.
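For completeness, the /seafile-nfs mount point itself would typically be backed by an NFS mount configured on each instance. A hypothetical /etc/fstab entry (the server name and export path are placeholders, not from this manual) could look like:

```
# Hypothetical fstab entry: replace nfs-server:/export/seafile with your export
nfs-server:/export/seafile  /seafile-nfs  nfs  rw,hard,noatime  0  0
```

Mount options should follow your NFS server's recommendations; the point is only that every instance mounts the same export at /seafile-nfs before the symlinks are created.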
For example Debian 12
Create systemd service files, replacing ${seafile_dir} with your Seafile installation location and seafile with the user that runs Seafile (if appropriate). Then reload systemd: systemctl daemon-reload.
First, you should create a script that activates the python virtual environment; it goes in the ${seafile_dir} directory. Put another way, it does not go in \"seafile-server-latest\", but in the directory above that. Throughout this manual the examples use /opt/seafile for this directory, but you might have chosen a different one.
sudo vim /opt/seafile/run_with_venv.sh\n
The content of the file is:
#!/bin/bash\n# Activate the python virtual environment (venv) before starting one of the seafile scripts\n\ndir_name=\"$(dirname $0)\"\nsource \"${dir_name}/python-venv/bin/activate\"\nscript=\"$1\"\nshift 1\n\necho \"${dir_name}/seafile-server-latest/${script}\" \"$@\"\n\"${dir_name}/seafile-server-latest/${script}\" \"$@\"\n
Make this script executable: sudo chmod 755 /opt/seafile/run_with_venv.sh\n
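The wrapper's dispatch logic can be sanity-checked in isolation: \"$1\" selects the target script and shift forwards the remaining arguments unchanged. Here is the same pattern standalone, with placeholder names:

```shell
# Same argument handling as run_with_venv.sh, shown standalone:
# the first argument names the target script, the rest pass through
# unchanged after "shift 1".
set -- seafile.sh start
script="$1"
shift 1
cmdline="seafile-server-latest/${script} $*"
echo "$cmdline"
```

This is why the systemd units below can call the wrapper as `run_with_venv.sh seafile.sh start` and have `start` reach seafile.sh intact.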
"},{"location":"setup_binary/start_seafile_at_system_bootup/#seafile-component","title":"Seafile component","text":"sudo vim /etc/systemd/system/seafile.service\n
The content of the file is:
[Unit]\nDescription=Seafile\n# add mysql.service or postgresql.service depending on your database to the line below\nAfter=network.target\n\n[Service]\nType=forking\nExecStart=bash ${seafile_dir}/run_with_venv.sh seafile.sh start\nExecStop=bash ${seafile_dir}/seafile-server-latest/seafile.sh stop\nLimitNOFILE=infinity\nUser=seafile\nGroup=seafile\n\n[Install]\nWantedBy=multi-user.target\n
"},{"location":"setup_binary/start_seafile_at_system_bootup/#seahub-component","title":"Seahub component","text":"sudo vim /etc/systemd/system/seahub.service\n
The content of the file is:
[Unit]\nDescription=Seafile hub\nAfter=network.target seafile.service\n\n[Service]\nType=forking\nExecStart=bash ${seafile_dir}/run_with_venv.sh seahub.sh start\nExecStop=bash ${seafile_dir}/seafile-server-latest/seahub.sh stop\nUser=seafile\nGroup=seafile\n\n[Install]\nWantedBy=multi-user.target\n
"},{"location":"setup_binary/start_seafile_at_system_bootup/#for-systems-running-systemd-without-python-virtual-environment","title":"For systems running systemd without python virtual environment","text":"For example Debian 8 through Debian 11, Linux Ubuntu 15.04 and newer
Create systemd service files, replacing ${seafile_dir} with your Seafile installation location and seafile with the user that runs Seafile (if appropriate). Then reload systemd: systemctl daemon-reload.
"},{"location":"setup_binary/start_seafile_at_system_bootup/#seafile-component_1","title":"Seafile component","text":"sudo vim /etc/systemd/system/seafile.service\n
The content of the file is:
[Unit]\nDescription=Seafile\n# add mysql.service or postgresql.service depending on your database to the line below\nAfter=network.target\n\n[Service]\nType=forking\nExecStart=${seafile_dir}/seafile-server-latest/seafile.sh start\nExecStop=${seafile_dir}/seafile-server-latest/seafile.sh stop\nLimitNOFILE=infinity\nUser=seafile\nGroup=seafile\n\n[Install]\nWantedBy=multi-user.target\n
"},{"location":"setup_binary/start_seafile_at_system_bootup/#seahub-component_1","title":"Seahub component","text":"Create systemd service file /etc/systemd/system/seahub.service
sudo vim /etc/systemd/system/seahub.service\n
The content of the file is:
[Unit]\nDescription=Seafile hub\nAfter=network.target seafile.service\n\n[Service]\nType=forking\nExecStart=${seafile_dir}/seafile-server-latest/seahub.sh start\nExecStop=${seafile_dir}/seafile-server-latest/seahub.sh stop\nUser=seafile\nGroup=seafile\n\n[Install]\nWantedBy=multi-user.target\n
"},{"location":"setup_binary/start_seafile_at_system_bootup/#seafile-cli-client-optional","title":"Seafile cli client (optional)","text":"Create systemd service file /etc/systemd/system/seafile-client.service
You need to create this service file only if you have the Seafile console client and want to run it on system boot.
sudo vim /etc/systemd/system/seafile-client.service\n
The content of the file is:
[Unit]\nDescription=Seafile client\n# Uncomment the next line if you are running the seafile client on the same computer as the server\n# After=seafile.service\n# Or the next one otherwise\n# After=network.target\n\n[Service]\nType=oneshot\nExecStart=/usr/bin/seaf-cli start\nExecStop=/usr/bin/seaf-cli stop\nRemainAfterExit=yes\nUser=seafile\nGroup=seafile\n\n[Install]\nWantedBy=multi-user.target\n
"},{"location":"setup_binary/start_seafile_at_system_bootup/#enable-service-start-on-system-boot","title":"Enable service start on system boot","text":"sudo systemctl enable seafile.service\nsudo systemctl enable seahub.service\nsudo systemctl enable seafile-client.service # optional\n
"},{"location":"setup_binary/using_logrotate/","title":"Set up logrotate for server","text":""},{"location":"setup_binary/using_logrotate/#how-it-works","title":"How it works","text":"seaf-server support reopenning logfiles by receiving a SIGUR1
signal.
This feature is very useful when you need cut logfiles while you don't want to shutdown the server. All you need to do now is cutting the logfile on the fly.
"},{"location":"setup_binary/using_logrotate/#default-logrotate-configuration-directory","title":"Default logrotate configuration directory","text":"For Debian, the default directory for logrotate should be /etc/logrotate.d/
Assuming your seaf-server's log file is set to /opt/seafile/logs/seafile.log and its pid file to /opt/seafile/pids/seaf-server.pid:
The configuration for logrotate could be like this:
/opt/seafile/logs/seafile.log\n/opt/seafile/logs/seahub.log\n/opt/seafile/logs/seafdav.log\n/opt/seafile/logs/fileserver-access.log\n/opt/seafile/logs/fileserver-error.log\n/opt/seafile/logs/fileserver.log\n/opt/seafile/logs/file_updates_sender.log\n/opt/seafile/logs/repo_old_file_auto_del_scan.log\n/opt/seafile/logs/seahub_email_sender.log\n/opt/seafile/logs/index.log\n{\n daily\n missingok\n rotate 7\n # compress\n # delaycompress\n dateext\n dateformat .%Y-%m-%d\n notifempty\n # create 644 root root\n sharedscripts\n postrotate\n if [ -f /opt/seafile/pids/seaf-server.pid ]; then\n kill -USR1 `cat /opt/seafile/pids/seaf-server.pid`\n fi\n\n if [ -f /opt/seafile/pids/fileserver.pid ]; then\n kill -USR1 `cat /opt/seafile/pids/fileserver.pid`\n fi\n\n if [ -f /opt/seafile/pids/seahub.pid ]; then\n kill -HUP `cat /opt/seafile/pids/seahub.pid`\n fi\n\n if [ -f /opt/seafile/pids/seafdav.pid ]; then\n kill -HUP `cat /opt/seafile/pids/seafdav.pid`\n fi\n\n find /opt/seafile/logs/ -mtime +7 -name \"*.log*\" -exec rm -f {} \\;\n endscript\n}\n
You can save this file, in Debian for example, at /etc/logrotate.d/seafile
.
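With dateext and dateformat .%Y-%m-%d, each rotated file gets a date suffix, and the find ... -mtime +7 line above then prunes copies older than seven days. The suffix shape can be previewed like this:

```shell
# Preview the suffix that "dateext" with "dateformat .%Y-%m-%d" appends
# to each rotated file.
suffix="$(date +.%Y-%m-%d)"
rotated="seafile.log${suffix}"
echo "$rotated"
```

A dry run with logrotate -d /etc/logrotate.d/seafile prints what would be rotated without touching any files.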
There are three types of upgrade, i.e., major version upgrade, minor version upgrade and maintenance version upgrade. This page contains general instructions for the three types of upgrade.
Please check the upgrade notes for any special configuration or changes before/while upgrading.
Suppose you are using version 5.1.0 and would like to upgrade to version 6.1.0. First download and extract the new version. You should have a directory layout similar to this:
seafile\n -- seafile-server-5.1.0\n -- seafile-server-6.1.0\n -- ccnet\n -- seafile-data\n
Now upgrade to version 6.1.0.
Shutdown Seafile server if it's running
cd seafile/seafile-server-latest\n./seahub.sh stop\n./seafile.sh stop\n# or via service\n/etc/init.d/seafile-server stop\n
Check the upgrade scripts in seafile-server-6.1.0 directory.
cd seafile/seafile-server-6.1.0\nls upgrade/upgrade_*\n
You will get a list of upgrade files:
...\nupgrade_5.0_5.1.sh\nupgrade_5.1_6.0.sh\nupgrade_6.0_6.1.sh\n
Starting from your current version, run the upgrade scripts one by one:
upgrade/upgrade_5.1_6.0.sh\nupgrade/upgrade_6.0_6.1.sh\n
Start Seafile server
cd seafile/seafile-server-latest/\n./seafile.sh start\n./seahub.sh start # or \"./seahub.sh start-fastcgi\" if you're using fastcgi\n# or via service\n/etc/init.d/seafile-server start\n
If the new version works fine, the old version can be removed
rm -rf seafile-server-5.1.0/\n
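When several releases lie between your current and target versions, the order of the upgrade scripts matters; a quick dry-run loop (using the script names from the example above) makes the ascending order explicit before you execute anything:

```shell
# Dry run: list the intermediate upgrade scripts in ascending version
# order (script names are from the 5.1.0 -> 6.1.0 example above).
planned=""
for script in upgrade/upgrade_5.1_6.0.sh upgrade/upgrade_6.0_6.1.sh; do
    echo "would run: $script"
    planned="$planned $script"
done
```

Once the list looks right, run the same scripts for real, one at a time, checking each one's output before moving on.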
"},{"location":"upgrade/upgrade/#minor-version-upgrade-eg-from-61x-to-62y","title":"Minor version upgrade (e.g. from 6.1.x to 6.2.y)","text":"Suppose you are using version 6.1.0 and like to upgrade to version 6.2.0. First download and extract the new version. You should have a directory layout similar to this:
seafile\n -- seafile-server-6.1.0\n -- seafile-server-6.2.0\n -- ccnet\n -- seafile-data\n
Now upgrade to version 6.2.0.
cd seafile/seafile-server-latest\n./seahub.sh stop\n./seafile.sh stop\n# or via service\n/etc/init.d/seafile-server stop\n
Check the upgrade scripts in seafile-server-6.2.0 directory.
cd seafile/seafile-server-6.2.0\nls upgrade/upgrade_*\n
You will get a list of upgrade files:
...\nupgrade/upgrade_5.1_6.0.sh\nupgrade/upgrade_6.0_6.1.sh\nupgrade/upgrade_6.1_6.2.sh\n
Starting from your current version, run the upgrade scripts one by one:
upgrade/upgrade_6.1_6.2.sh\n
Start Seafile server
./seafile.sh start\n./seahub.sh start\n# or via service\n/etc/init.d/seafile-server start\n
If the new version works, the old version can be removed
rm -rf seafile-server-6.1.0/\n
"},{"location":"upgrade/upgrade/#maintenance-version-upgrade-eg-from-622-to-623","title":"Maintenance version upgrade (e.g. from 6.2.2 to 6.2.3)","text":"A maintenance upgrade is for example an upgrade from 6.2.2 to 6.2.3.
For this type of upgrade, you only need to update the symbolic links (for avatars and a few other folders). A script to perform this upgrade is provided with Seafile server (for historical reasons, it is called minor-upgrade.sh):
cd seafile-server-6.2.3/upgrade/ && ./minor-upgrade.sh\n
Start Seafile
If the new version works, the old version can be removed
rm -rf seafile-server-6.2.2/\n
Seafile adds new features in major and minor versions. It is likely that some database tables need to be modified or the search index needs to be updated. In general, upgrading a cluster consists of the following steps:
In general, to upgrade a cluster, you need:
A maintenance upgrade is simple: you only need to run the script ./upgrade/minor_upgrade.sh at each node to update the symbolic link.
On the background node, Seahub no longer needs to be started. Nginx is not needed either.
The way the office converter works has changed: Seahub on the front-end nodes now directly accesses a service on the background node.
"},{"location":"upgrade/upgrade_a_cluster/#for-front-end-nodes","title":"For front-end nodes","text":"seahub_settings.py
OFFICE_CONVERTOR_ROOT = 'http://<ip of node background>'\n\u2b07\ufe0f\nOFFICE_CONVERTOR_ROOT = 'http://<ip of node background>:6000'\n
seafevents.conf
[OFFICE CONVERTER]\nenabled = true\nworkers = 1\nmax-size = 10\n\n\u2b07\ufe0f\n[OFFICE CONVERTER]\nenabled = true\nworkers = 1\nmax-size = 10\nhost = <ip of node background>\nport = 6000\n
"},{"location":"upgrade/upgrade_a_cluster/#for-backend-node","title":"For backend node","text":"seahub_settings.py is not needed. But you can leave it unchanged.
seafevents.conf
[OFFICE CONVERTER]\nenabled = true\nworkers = 1\nmax-size = 10\n\n\u2b07\ufe0f\n[OFFICE CONVERTER]\nenabled = true\nworkers = 1\nmax-size = 10\nhost = <ip of node background>\nport = 6000\n
"},{"location":"upgrade/upgrade_a_cluster/#from-63-to-70","title":"From 6.3 to 7.0","text":"No special upgrade operations.
"},{"location":"upgrade/upgrade_a_cluster/#from-62-to-63","title":"From 6.2 to 6.3","text":"In version 6.2.11, the included Django was upgraded. The memcached configuration needed to be upgraded if you were using a cluster. If you upgrade from a version below 6.1.11, don't forget to change your memcache configuration. If the configuration in your seahub_settings.py
is:
CACHES = {\n 'default': {\n 'BACKEND': 'django_pylibmc.memcached.PyLibMCCache',\n 'LOCATION': '<MEMCACHED SERVER IP>:11211',\n }\n}\n\nCOMPRESS_CACHE_BACKEND = 'django.core.cache.backends.locmem.LocMemCache'\n
Now you need to change to:
CACHES = {\n 'default': {\n 'BACKEND': 'django_pylibmc.memcached.PyLibMCCache',\n 'LOCATION': '<MEMCACHED SERVER IP>:11211',\n },\n 'locmem': {\n 'BACKEND': 'django.core.cache.backends.locmem.LocMemCache',\n },\n}\nCOMPRESS_CACHE_BACKEND = 'locmem'\n
"},{"location":"upgrade/upgrade_a_cluster/#from-61-to-62","title":"From 6.1 to 6.2","text":"No special upgrade operations.
"},{"location":"upgrade/upgrade_a_cluster/#from-60-to-61","title":"From 6.0 to 6.1","text":"In version 6.1, we upgraded the included ElasticSearch server. The old server listen on port 9500, new server listen on port 9200. Please change your firewall settings.
"},{"location":"upgrade/upgrade_a_cluster/#from-51-to-60","title":"From 5.1 to 6.0","text":"In version 6.0, the folder download mechanism has been updated. This requires that, in a cluster deployment, seafile-data/httptemp folder must be in an NFS share. You can make this folder a symlink to the NFS share.
cd /data/haiwen/\nln -s /nfs-share/seafile-httptemp seafile-data/httptemp\n
The httptemp folder only contains temp files for downloading/uploading file on web UI. So there is no reliability requirement for the NFS share. You can export it from any node in the cluster.
"},{"location":"upgrade/upgrade_a_cluster/#from-v50-to-v51","title":"From v5.0 to v5.1","text":"Because Django is upgraded to 1.8, the COMPRESS_CACHE_BACKEND should be changed
- COMPRESS_CACHE_BACKEND = 'locmem://'\n + COMPRESS_CACHE_BACKEND = 'django.core.cache.backends.locmem.LocMemCache'\n
"},{"location":"upgrade/upgrade_a_cluster/#from-v44-to-v50","title":"From v4.4 to v5.0","text":"v5.0 introduces some database schema change, and all configuration files (ccnet.conf, seafile.conf, seafevents.conf, seahub_settings.py) are moved to a central config directory.
Perform the following steps to upgrade:
./upgrade/upgrade_4.4_5.0.sh\n
SEAFILE_SKIP_DB_UPGRADE
environmental variable turned on:SEAFILE_SKIP_DB_UPGRADE=1 ./upgrade/upgrade_4.4_5.0.sh\n
After the upgrade, you should see that the configuration files have been moved to the conf/ folder.
conf/\n |__ seafile.conf\n |__ seafevent.conf\n |__ seafdav.conf\n |__ seahub_settings.conf\n
"},{"location":"upgrade/upgrade_a_cluster/#from-v43-to-v44","title":"From v4.3 to v4.4","text":"There are no database and search index upgrade from v4.3 to v4.4. Perform the following steps to upgrade:
v4.3 contains no database table change from v4.2. But the old search index will be deleted and regenerated.
A new option COMPRESS_CACHE_BACKEND = 'django.core.cache.backends.locmem.LocMemCache' should be added to seahub_settings.py
The secret key in seahub_settings.py needs to be regenerated; the old secret key lacks enough randomness.
Perform the following steps to upgrade:
Seafile adds new features in major and minor versions. It is likely that some database tables need to be modified or the search index needs to be updated. In general, upgrading a cluster consists of the following steps:
In general, to upgrade a cluster, you need:
A maintenance upgrade only requires downloading the new image, stopping the old docker container, and changing the Seafile image version in docker-compose.yml to the new version. Then start with docker compose up.
"},{"location":"upgrade/upgrade_a_cluster_docker/#upgrade-from-100-to-110","title":"Upgrade from 10.0 to 11.0","text":"Migrate your configuration for LDAP and OAuth according to here
"},{"location":"upgrade/upgrade_a_cluster_docker/#upgrade-from-90-to-100","title":"Upgrade from 9.0 to 10.0","text":"If you are using with ElasticSearch, SAML SSO and storage backend features, follow the upgrading manual on how to update the configuration for these features.
If you want to use the new notification server and rate control (pro edition only), please refer to the upgrading manual.
"},{"location":"upgrade/upgrade_a_cluster_docker/#upgrade-from-80-to-90","title":"Upgrade from 8.0 to 9.0","text":"If you are using with ElasticSearch, follow the upgrading manual on how to update the configuration.
"},{"location":"upgrade/upgrade_docker/","title":"Upgrade Seafile Docker","text":"For maintenance upgrade, like from version 10.0.1 to version 10.0.4, just download the new image, stop the old docker container, modify the Seafile image version in docker-compose.yml to the new version, then start with docker compose up.
For major version upgrade, like from 10.0 to 11.0, see instructions below.
Please check the upgrade notes for any special configuration or changes before/while upgrading.
"},{"location":"upgrade/upgrade_docker/#upgrade-from-110-to-120","title":"Upgrade from 11.0 to 12.0","text":"Note: If you have a large number of Activity
in MySQL, clear this table first Clean Database. Otherwise, the database upgrade will take a long time.
From Seafile Docker 12.0, we recommend that you use .env
and seafile-server.yml
files for configuration.
mv docker-compose.yml docker-compose.yml.bak\n
"},{"location":"upgrade/upgrade_docker/#download-seafile-120-docker-files","title":"Download Seafile 12.0 Docker files","text":"Download .env, seafile-server.yml and caddy.yml, and modify .env file according to the old configuration in docker-compose.yml.bak
wget -O .env https://manual.seafile.com/12.0/docker/ce/env\nwget https://manual.seafile.com/12.0/docker/ce/seafile-server.yml\nwget https://manual.seafile.com/12.0/docker/caddy.yml\n
The following fields merit particular attention:
SEAFILE_VOLUME: the volume directory of Seafile data (default: /opt/seafile-data)
SEAFILE_MYSQL_VOLUME: the volume directory of MySQL data (default: /opt/seafile-mysql/db)
SEAFILE_CADDY_VOLUME: the volume directory of Caddy data, used to store certificates obtained from Let's Encrypt (default: /opt/seafile-caddy)
SEAFILE_MYSQL_DB_USER: the MySQL user (the database user can be found in conf/seafile.conf; default: seafile)
SEAFILE_MYSQL_DB_PASSWORD: the password of the MySQL user seafile (required)
SEAFILE_MYSQL_DB_CCNET_DB_NAME: the database name of ccnet (default: ccnet_db)
SEAFILE_MYSQL_DB_SEAFILE_DB_NAME: the database name of seafile (default: seafile_db)
SEAFILE_MYSQL_DB_SEAHUB_DB_NAME: the database name of seahub (default: seahub_db)
JWT_PRIVATE_KEY: a random string of no less than 32 characters, which can be generated with pwgen -s 40 1 (required)
SEAFILE_SERVER_HOSTNAME: Seafile server hostname or domain (required)
SEAFILE_SERVER_PROTOCOL: Seafile server protocol, http or https (default: http)
TIME_ZONE: time zone (default: UTC)
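Put together, a minimal community-edition .env built from the fields above might look like this; every value below is a placeholder to replace with your own:

```
SEAFILE_VOLUME=/opt/seafile-data
SEAFILE_MYSQL_VOLUME=/opt/seafile-mysql/db
SEAFILE_CADDY_VOLUME=/opt/seafile-caddy
SEAFILE_MYSQL_DB_USER=seafile
SEAFILE_MYSQL_DB_PASSWORD=change-me
SEAFILE_MYSQL_DB_CCNET_DB_NAME=ccnet_db
SEAFILE_MYSQL_DB_SEAFILE_DB_NAME=seafile_db
SEAFILE_MYSQL_DB_SEAHUB_DB_NAME=seahub_db
JWT_PRIVATE_KEY=replace-with-output-of-pwgen
SEAFILE_SERVER_HOSTNAME=seafile.example.com
SEAFILE_SERVER_PROTOCOL=https
TIME_ZONE=UTC
```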
wget -O .env https://manual.seafile.com/12.0/docker/pro/env\nwget https://manual.seafile.com/12.0/docker/pro/seafile-server.yml\nwget https://manual.seafile.com/12.0/docker/caddy.yml\n
The following fields merit particular attention:
SEAFILE_VOLUME: the volume directory of Seafile data (default: /opt/seafile-data)
SEAFILE_MYSQL_VOLUME: the volume directory of MySQL data (default: /opt/seafile-mysql/db)
SEAFILE_CADDY_VOLUME: the volume directory of Caddy data, used to store certificates obtained from Let's Encrypt (default: /opt/seafile-caddy)
SEAFILE_ELASTICSEARCH_VOLUME: (only valid for Seafile PE) the volume directory of Elasticsearch data (default: /opt/seafile-elasticsearch/data)
SEAFILE_MYSQL_DB_USER: the MySQL user (the database user can be found in conf/seafile.conf; default: seafile)
SEAFILE_MYSQL_DB_PASSWORD: the password of the MySQL user seafile (required)
JWT_PRIVATE_KEY: a random string of no less than 32 characters, which can be generated with pwgen -s 40 1 (required)
SEAFILE_SERVER_HOSTNAME: Seafile server hostname or domain (required)
SEAFILE_SERVER_PROTOCOL: Seafile server protocol, http or https (default: http)
TIME_ZONE: time zone (default: UTC)
Note: the existing database user and password can be found in conf/seafile.conf. Once initialization is complete, the INIT_SEAFILE_MYSQL_ROOT_PASSWORD, INIT_SEAFILE_ADMIN_EMAIL and INIT_SEAFILE_ADMIN_PASSWORD variables can be removed from the .env file.
SSL is now handled by the caddy server. If you have used SSL before, you will also need to modify seafile.nginx.conf: change server listen 443 to 80.
Backup the original seafile.nginx.conf file:
cp seafile.nginx.conf seafile.nginx.conf.bak\n
Remove the server listen 80
section:
#server {\n# listen 80;\n# server_name _ default_server;\n\n # allow certbot to connect to challenge location via HTTP Port 80\n # otherwise renewal request will fail\n# location /.well-known/acme-challenge/ {\n# alias /var/www/challenges/;\n# try_files $uri =404;\n# }\n\n# location / {\n# rewrite ^ https://seafile.example.com$request_uri? permanent;\n# }\n#}\n
Change server listen 443
to 80
:
server {\n#listen 443 ssl;\nlisten 80;\n\n# ssl_certificate /shared/ssl/pkg.seafile.top.crt;\n# ssl_certificate_key /shared/ssl/pkg.seafile.top.key;\n\n# ssl_ciphers ECDH+AESGCM:DH+AESGCM:ECDH+AES256:DH+AES256:ECDH+AES128:DH+AES:ECDH+3DES:DH+3DES:RSA+AESGCM:RSA+AES:RSA+3DES:!aNULL:!MD5:!DSS;\n\n ...\n
Start with docker compose up.
"},{"location":"upgrade/upgrade_docker/#upgrade-notification-server","title":"Upgrade notification server","text":"If you has deployed the notification server. The Notification Server is now moved to its own Docker image. You need to redeploy it according to Notification Server document
"},{"location":"upgrade/upgrade_docker/#upgrade-seadoc-from-08-to-10-for-seafile-v120","title":"Upgrade SeaDoc from 0.8 to 1.0 for Seafile v12.0","text":"If you have deployed SeaDoc v0.8 with Seafile v11.0, you can upgrade it to 1.0 use the following steps:
From version 1.0, SeaDoc uses the seahub_db database to store its operation logs and no longer needs the extra sdoc_db database. The database tables in seahub_db are created automatically when you upgrade Seafile server from v11.0 to v12.0. You can simply delete sdoc_db.
"},{"location":"upgrade/upgrade_docker/#remove-seadoc-configs-in-seafilenginxconf-file","title":"Remove SeaDoc configs in seafile.nginx.conf file","text":"If you have deployed SeaDoc older version, you should remove /sdoc-server/
, /socket.io
configs in seafile.nginx.conf file.
# location /sdoc-server/ {\n# add_header Access-Control-Allow-Origin *;\n# add_header Access-Control-Allow-Methods GET,POST,PUT,DELETE,OPTIONS;\n# add_header Access-Control-Allow-Headers \"deviceType,token, authorization, content-type\";\n# if ($request_method = 'OPTIONS') {\n# add_header Access-Control-Allow-Origin *;\n# add_header Access-Control-Allow-Methods GET,POST,PUT,DELETE,OPTIONS;\n# add_header Access-Control-Allow-Headers \"deviceType,token, authorization, content-type\";\n# return 204;\n# }\n# proxy_pass http://sdoc-server:7070/;\n# proxy_redirect off;\n# proxy_set_header Host $host;\n# proxy_set_header X-Real-IP $remote_addr;\n# proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;\n# proxy_set_header X-Forwarded-Host $server_name;\n# proxy_set_header X-Forwarded-Proto $scheme;\n# client_max_body_size 100m;\n# }\n# location /socket.io {\n# proxy_pass http://sdoc-server:7070;\n# proxy_http_version 1.1;\n# proxy_set_header Upgrade $http_upgrade;\n# proxy_set_header Connection 'upgrade';\n# proxy_redirect off;\n# proxy_buffers 8 32k;\n# proxy_buffer_size 64k;\n# proxy_set_header X-Real-IP $remote_addr;\n# proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;\n# proxy_set_header Host $http_host;\n# proxy_set_header X-NginX-Proxy true;\n# }\n
"},{"location":"upgrade/upgrade_docker/#deploy-a-new-seadoc-server","title":"Deploy a new SeaDoc server","text":"Please see the document Setup SeaDoc to install SeaDoc with Seafile.
"},{"location":"upgrade/upgrade_docker/#upgrade-from-100-to-110","title":"Upgrade from 10.0 to 11.0","text":"Download the new image, stop the old docker container, modify the Seafile image version in docker-compose.yml to the new version. Taking the community edition as an example, you have to modify
...\nservice:\n ...\n seafile:\n image: seafileltd/seafile-mc:10.0-latest\n ...\n ...\n
to
service:\n ...\n seafile:\n image: seafileltd/seafile-mc:11.0-latest\n ...\n ...\n
It is also recommended that you upgrade mariadb and memcached to newer versions as in the v11.0 docker-compose.yml file. Specifically, in version 11.0, we use the following versions:
What's more, you have to migrate configuration for LDAP and OAuth according to here
Start with docker compose up.
"},{"location":"upgrade/upgrade_docker/#upgrade-from-90-to-100","title":"Upgrade from 9.0 to 10.0","text":"Just download the new image, stop the old docker container, modify the Seafile image version in docker-compose.yml to the new version, then start with docker compose up.
If you are using the pro edition with the ElasticSearch, SAML SSO or storage backend features, follow the upgrading manual on how to update the configuration for these features.
If you want to use the new notification server and rate control (pro edition only), please refer to the upgrading manual.
"},{"location":"upgrade/upgrade_docker/#upgrade-from-80-to-90","title":"Upgrade from 8.0 to 9.0","text":"Just download the new image, stop the old docker container, modify the Seafile image version in docker-compose.yml to the new version, then start with docker compose up.
"},{"location":"upgrade/upgrade_docker/#lets-encrypt-ssl-certificate","title":"Let's encrypt SSL certificate","text":"Since version 9.0.6, we use Acme V3 (not acme-tiny) to get certificate.
If there is a certificate generated by an old version, you need to back up and move the old certificate directory and the seafile.nginx.conf before starting.
mv /opt/seafile/shared/ssl /opt/seafile/shared/ssl-bak\n\nmv /opt/seafile/shared/nginx/conf/seafile.nginx.conf /opt/seafile/shared/nginx/conf/seafile.nginx.conf.bak\n
Starting the new container will automatically apply a certificate.
docker compose down\ndocker compose up -d\n
Please wait a moment for the certificate to be applied, then you can modify the new seafile.nginx.conf as you want. Execute the following command to make the nginx configuration take effect.
docker exec seafile nginx -s reload\n
A cron job inside the container will automatically renew the certificate.
"},{"location":"upgrade/upgrade_docker/#upgrade-from-71-to-80","title":"Upgrade from 7.1 to 8.0","text":"Just download the new image, stop the old docker container, modify the Seafile image version in docker-compose.yml to the new version, then start with docker compose up.
"},{"location":"upgrade/upgrade_docker/#upgrade-from-70-to-71","title":"Upgrade from 7.0 to 7.1","text":"Just download the new image, stop the old docker container, modify the Seafile image version in docker-compose.yml to the new version, then start with docker compose up.
"},{"location":"upgrade/upgrade_notes_for_10.0.x/","title":"Upgrade notes for 10.0","text":"These notes give additional information about changes. Please always follow the main upgrade guide.
For docker based version, please check upgrade Seafile Docker image
"},{"location":"upgrade/upgrade_notes_for_10.0.x/#important-release-changes","title":"Important release changes","text":""},{"location":"upgrade/upgrade_notes_for_10.0.x/#enable-notification-server","title":"Enable notification server","text":"The notification server enables desktop syncing and drive clients to get notification of library changes immediately using websocket. There are two benefits:
The notification server works with Seafile syncing client 9.0+ and drive client 3.0+.
Please follow the document to enable notification server
"},{"location":"upgrade/upgrade_notes_for_10.0.x/#memcached-section-in-the-seafileconf-pro-edition-only","title":"Memcached section in the seafile.conf (pro edition only)","text":"If you use storage backend or cluster, make sure the memcached section is in the seafile.conf.
Since version 10.0, all memcached options are consolidated to the one below.
Modify the seafile.conf:
[memcached]\nmemcached_options = --SERVER=<the IP of Memcached Server> --POOL-MIN=10 --POOL-MAX=100\n
"},{"location":"upgrade/upgrade_notes_for_10.0.x/#saml-sso-change-pro-edition-only","title":"SAML SSO change (pro edition only)","text":"The configuration for SAML SSO in Seafile is greatly simplified. Now only three options are needed:
ENABLE_ADFS_LOGIN = True\nLOGIN_REDIRECT_URL = '/saml2/complete/'\nSAML_REMOTE_METADATA_URL = 'https://login.microsoftonline.com/xxx/federationmetadata/2007-06/federationmetadata.xml?appid=xxx'\nSAML_ATTRIBUTE_MAPPING = {\n 'name': ('display_name', ),\n 'mail': ('contact_email', ),\n ...\n}\n
Please check the new document on SAML SSO
"},{"location":"upgrade/upgrade_notes_for_10.0.x/#rate-control-in-role-settings-pro-edition-only","title":"Rate control in role settings (pro edition only)","text":"Starting from version 10.0, Seafile allows administrators to configure upload and download speed limits for users with different roles through the following two steps:
seahub_settings.py
.ENABLED_ROLE_PERMISSIONS = {\n 'default': {\n ...\n 'upload_rate_limit': 2000, # unit: kb/s\n 'download_rate_limit': 4000,\n ...\n },\n 'guest': {\n ...\n 'upload_rate_limit': 100,\n 'download_rate_limit': 200,\n ...\n },\n}\n
seafile-server-latest
directory to make the configuration take effect../seahub.sh python-env python3 seahub/manage.py set_user_role_upload_download_rate_limit\n
"},{"location":"upgrade/upgrade_notes_for_10.0.x/#elasticsearch-change-pro-edition-only","title":"ElasticSearch change (pro edition only)","text":"Elasticsearch is upgraded to version 8.x, fixed and improved some issues of file search function.
Since elasticsearch 7.x, the default number of shards has changed from 5 to 1, because too many index shards will over-occupy system resources; but when a single shard data is too large, it will also reduce search performance. Starting from version 10.0, Seafile supports customizing the number of shards in the configuration file.
You can use the following command to query the current size of each shard to determine the best number of shards for you:
curl 'http{s}://<es IP>:9200/_cat/shards/repofiles?v'\n
The official recommendation is that each shard should be between 10G and 50G: https://www.elastic.co/guide/en/elasticsearch/reference/8.6/size-your-shards.html#shard-size-recommendation.
Modify the seafevents.conf:
[INDEX FILES]\n...\nshards = 10 # default is 5\n...\n
"},{"location":"upgrade/upgrade_notes_for_10.0.x/#new-python-libraries","title":"New Python libraries","text":"Note, you should install Python libraries system wide using root user or sudo mode.
For Ubuntu 20.04/22.04
sudo pip3 install future==0.18.* mysqlclient==2.1.* pillow==10.2.* captcha==0.5.* django_simple_captcha==0.5.20 djangosaml2==1.5.* pysaml2==7.2.* pycryptodome==3.16.* cffi==1.15.1\n
For Debian 11
sudo pip3 install future==0.18.* mysqlclient==2.1.* pillow==9.3.* captcha==0.4 django_simple_captcha==0.5.* djangosaml2==1.5.* pysaml2==7.2.* pycryptodome==3.16.* cffi==1.15.1\n
"},{"location":"upgrade/upgrade_notes_for_10.0.x/#upgrade-to-100x","title":"Upgrade to 10.0.x","text":"Stop Seafile-9.0.x server.
Start from Seafile 10.0.x, run the script:
upgrade/upgrade_9.0_10.0.sh\n
If you are using the pro edition, modify the memcached option in seafile.conf and the SAML SSO configuration if needed.
You can choose one of the following methods to upgrade your index data.
"},{"location":"upgrade/upgrade_notes_for_10.0.x/#method-one-reindex-the-old-index-data","title":"Method one, reindex the old index data","text":"1. Download Elasticsearch image:
docker pull elasticsearch:7.17.9\n
Create a new folder to store ES data and give the folder permissions:
mkdir -p /opt/seafile-elasticsearch/data && chmod -R 777 /opt/seafile-elasticsearch/data/\n
Start ES docker image:
sudo docker run -d --name es-7.17 -p 9200:9200 -e \"discovery.type=single-node\" -e \"bootstrap.memory_lock=true\" -e \"ES_JAVA_OPTS=-Xms1g -Xmx1g\" -e \"xpack.security.enabled=false\" --restart=always -v /opt/seafile-elasticsearch/data:/usr/share/elasticsearch/data -d elasticsearch:7.17.9\n
Note: ES_JAVA_OPTS
can be adjusted according to your needs.
2. Create an index with 8.x compatible mappings:
# create repo_head index\ncurl -X PUT -H 'Content-Type: application/json' 'http{s}://{es server IP}:9200/repo_head_8?pretty=true' -d '\n{\n \"mappings\" : {\n \"properties\" : {\n \"commit\" : {\n \"type\" : \"keyword\",\n \"index\" : false\n },\n \"repo\" : {\n \"type\" : \"keyword\",\n \"index\" : false\n },\n \"updatingto\" : {\n \"type\" : \"keyword\",\n \"index\" : false\n }\n }\n }\n}'\n\n# create repofiles index, number_of_shards is the number of shards, here is set to 5, you can also modify it to the most suitable number of shards\ncurl -X PUT -H 'Content-Type: application/json' 'http{s}://{es server IP}:9200/repofiles_8/?pretty=true' -d '\n{\n \"settings\" : {\n \"index\" : {\n \"number_of_shards\" : \"5\",\n \"analysis\" : {\n \"analyzer\" : {\n \"seafile_file_name_ngram_analyzer\" : {\n \"filter\" : [\n \"lowercase\"\n ],\n \"type\" : \"custom\",\n \"tokenizer\" : \"seafile_file_name_ngram_tokenizer\"\n }\n },\n \"tokenizer\" : {\n \"seafile_file_name_ngram_tokenizer\" : {\n \"type\" : \"ngram\",\n \"min_gram\" : \"3\",\n \"max_gram\" : \"4\"\n }\n }\n }\n }\n },\n \"mappings\" : {\n \"properties\" : {\n \"content\" : {\n \"type\" : \"text\",\n \"term_vector\" : \"with_positions_offsets\"\n },\n \"filename\" : {\n \"type\" : \"text\",\n \"fields\" : {\n \"ngram\" : {\n \"type\" : \"text\",\n \"analyzer\" : \"seafile_file_name_ngram_analyzer\"\n }\n }\n },\n \"is_dir\" : {\n \"type\" : \"boolean\"\n },\n \"mtime\" : {\n \"type\" : \"date\"\n },\n \"path\" : {\n \"type\" : \"keyword\"\n },\n \"repo\" : {\n \"type\" : \"keyword\"\n },\n \"size\" : {\n \"type\" : \"long\"\n },\n \"suffix\" : {\n \"type\" : \"keyword\"\n }\n }\n }\n}'\n
3. Set the refresh_interval
to -1
and the number_of_replicas
to 0
for efficient reindex:
curl -X PUT -H 'Content-Type: application/json' 'http{s}://{es server IP}:9200/repo_head_8/_settings?pretty' -d '\n{\n \"index\" : {\n \"refresh_interval\" : \"-1\",\n \"number_of_replicas\" : 0\n }\n}'\n\ncurl -X PUT -H 'Content-Type: application/json' 'http{s}://{es server IP}:9200/repofiles_8/_settings?pretty' -d '\n{\n \"index\" : {\n \"refresh_interval\" : \"-1\",\n \"number_of_replicas\" : 0\n }\n}'\n
4. Use the reindex API to copy documents from the 7.x index into the new index:
curl -X POST -H 'Content-Type: application/json' 'http{s}://{es server IP}:9200/_reindex/?wait_for_completion=false&pretty=true' -d '\n{\n \"source\": {\n \"index\": \"repo_head\"\n },\n \"dest\": {\n \"index\": \"repo_head_8\"\n }\n}'\n\ncurl -X POST -H 'Content-Type: application/json' 'http{s}://{es server IP}:9200/_reindex/?wait_for_completion=false&pretty=true' -d '\n{\n \"source\": {\n \"index\": \"repofiles\"\n },\n \"dest\": {\n \"index\": \"repofiles_8\"\n }\n}'\n
5. Use the following command to check if the reindex task is complete:
# Get the task_id of the reindex task:\n$ curl 'http{s}://{es server IP}:9200/_tasks?actions=*reindex&pretty'\n# Check to see if the reindex task is complete:\n$ curl 'http{s}://{es server IP}:9200/_tasks/:<task_id>?pretty'\n
6. Reset the refresh_interval
and number_of_replicas
to the values used in the old index:
curl -X PUT -H 'Content-Type: application/json' 'http{s}://{es server IP}:9200/repo_head_8/_settings?pretty' -d '\n{\n \"index\" : {\n \"refresh_interval\" : null,\n \"number_of_replicas\" : 1\n }\n}'\n\ncurl -X PUT -H 'Content-Type: application/json' 'http{s}://{es server IP}:9200/repofiles_8/_settings?pretty' -d '\n{\n \"index\" : {\n \"refresh_interval\" : null,\n \"number_of_replicas\" : 1\n }\n}'\n
7. Wait for the elasticsearch status to change to green
(or yellow
if it is a single node).
curl 'http{s}://{es server IP}:9200/_cluster/health?pretty'\n
8. Use the aliases API to delete the old index and add an alias with the old index name to the new index:
curl -X POST -H 'Content-Type: application/json' 'http{s}://{es server IP}:9200/_aliases?pretty' -d '\n{\n \"actions\": [\n {\"remove_index\": {\"index\": \"repo_head\"}},\n {\"remove_index\": {\"index\": \"repofiles\"}},\n {\"add\": {\"index\": \"repo_head_8\", \"alias\": \"repo_head\"}},\n {\"add\": {\"index\": \"repofiles_8\", \"alias\": \"repofiles\"}}\n ]\n}'\n
9. Deactivate the 7.17 container, pull the 8.x image and run:
$ docker stop es-7.17\n\n$ docker rm es-7.17\n\n$ docker pull elasticsearch:8.6.2\n\n$ sudo docker run -d --name es -p 9200:9200 -e \"discovery.type=single-node\" -e \"bootstrap.memory_lock=true\" -e \"ES_JAVA_OPTS=-Xms1g -Xmx1g\" -e \"xpack.security.enabled=false\" --restart=always -v /opt/seafile-elasticsearch/data:/usr/share/elasticsearch/data -d elasticsearch:8.6.2\n
"},{"location":"upgrade/upgrade_notes_for_10.0.x/#method-two-rebuild-the-index-and-discard-the-old-index-data","title":"Method two, rebuild the index and discard the old index data","text":"1. Pull Elasticsearch image:
docker pull elasticsearch:8.5.3\n
Create a new folder to store ES data and give the folder permissions:
mkdir -p /opt/seafile-elasticsearch/data && chmod -R 777 /opt/seafile-elasticsearch/data/\n
Start ES docker image:
sudo docker run -d --name es -p 9200:9200 -e \"discovery.type=single-node\" -e \"bootstrap.memory_lock=true\" -e \"ES_JAVA_OPTS=-Xms1g -Xmx1g\" -e \"xpack.security.enabled=false\" --restart=always -v /opt/seafile-elasticsearch/data:/usr/share/elasticsearch/data -d elasticsearch:8.5.3\n
2. Modify the seafevents.conf:
[INDEX FILES]\n...\nexternal_es_server = true\nes_host = http{s}://{es server IP}\nes_port = 9200\nshards = 10 # default is 5.\n...\n
Restart Seafile server:
su seafile\ncd seafile-server-latest/\n./seafile.sh stop && ./seahub.sh stop\n./seafile.sh start && ./seahub.sh start\n
3. Delete old index data
rm -rf /opt/seafile-elasticsearch/data/*\n
4. Create new index data:
$ cd /opt/seafile/seafile-server-latest\n$ ./pro/pro.py search --update\n
"},{"location":"upgrade/upgrade_notes_for_10.0.x/#method-three-if-you-are-in-a-cluster-environment","title":"Method three, if you are in a cluster environment","text":"1. Deploy elasticsearch 8.x according to method two. Use Seafile 10.0 version to deploy a new backend node and modify the seafevents.conf
file. Do not start the Seafile background service on this backend node; just manually run the command ./pro/pro.py search --update
.
2. Upgrade the other nodes to Seafile 10.0 version and use the new Elasticsearch 8.x server.
3. Then deactivate the old backend node and the old version of Elasticsearch.
"},{"location":"upgrade/upgrade_notes_for_11.0.x/","title":"Upgrade notes for 11.0","text":"These notes give additional information about changes. Please always follow the main upgrade guide.
For docker based version, please check upgrade Seafile Docker image
"},{"location":"upgrade/upgrade_notes_for_11.0.x/#important-release-changes","title":"Important release changes","text":""},{"location":"upgrade/upgrade_notes_for_11.0.x/#change-of-user-identity","title":"Change of user identity","text":"Previous Seafile versions directly used a user's email address or SSO identity as their internal user ID.
Seafile 11.0 introduces virtual user IDs - random, internal identifiers like \"adc023e7232240fcbb83b273e1d73d36@auth.local\". For new users, a virtual ID will be generated instead of directly using their email. A mapping between the email and virtual ID is stored in the \"profile_profile\" database table. For SSO users, the mapping between the SSO ID and virtual ID is stored in the \"social_auth_usersocialauth\" table.
Overall, this brings more flexibility in handling user accounts and identity changes. Existing users keep their old IDs.
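The email-to-virtual-ID mapping can be inspected directly in the database. The sketch below is an assumption-laden example: it assumes a MySQL backend, the default seahub_db database name, and a hypothetical user alice@example.com — adjust all three to your deployment.

```shell
# Look up the internal virtual ID that Seafile generated for a login email.
# Assumptions: MySQL backend, database named seahub_db, user alice@example.com.
mysql -u seafile -p seahub_db -e \
  "SELECT user, contact_email FROM profile_profile WHERE contact_email = 'alice@example.com';"
```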
"},{"location":"upgrade/upgrade_notes_for_11.0.x/#reimplementation-of-ldap-integration","title":"Reimplementation of LDAP Integration","text":"Previous Seafile versions handled LDAP authentication in the ccnet-server component. In Seafile 11.0, LDAP is reimplemented within the Seahub Python codebase.
LDAP configuration has been moved from ccnet.conf to seahub_settings.py. The ccnet_db.LDAPImported table is no longer used - LDAP users are now stored in ccnet_db.EmailUsers along with other users.
Benefits of this new implementation:
You need to run migrate_ldapusers.py
script to merge the ccnet_db.LDAPImported table into the ccnet_db.EmailUsers table. The settings files need to be changed manually (see details below).
If you use OAuth authentication, the configuration needs to be changed slightly.
If you use SAML, you don't need to change configuration files. For SAML2 in version 10, the name_id field returned from the SAML server is used as the username (the email field in ccnet_db.EmailUsers). In version 11, for old users, Seafile will find the old user and create a name_id to name_id mapping in social_auth_usersocialauth. For new users, Seafile will create a new user with a random ID and add a name_id to random ID mapping in social_auth_usersocialauth. In addition, you can now disable login with a username and password for SAML users by setting DISABLE_ADFS_USER_PWD_LOGIN = True
in seahub_settings.py.
Seafile 11.0 drops support for SQLite as the database. It is better to migrate from SQLite to MySQL before upgrading to version 11.0.
There are several reasons driving this change:
To migrate from SQLite database to MySQL database, you can follow the document Migrate from SQLite to MySQL. If you have issues in the migration, just post a thread in our forum. We are glad to help you.
"},{"location":"upgrade/upgrade_notes_for_11.0.x/#elasticsearch-change-pro-edition-only","title":"ElasticSearch change (pro edition only)","text":"Elasticsearch version is not changed in Seafile version 11.0
"},{"location":"upgrade/upgrade_notes_for_11.0.x/#new-saml-prerequisites-multi_tenancy-only","title":"New SAML prerequisites (MULTI_TENANCY only)","text":"For Ubuntu 20.04/22.04
sudo apt-get update\nsudo apt-get install -y dnsutils\n
"},{"location":"upgrade/upgrade_notes_for_11.0.x/#django-csrf-protection-issue","title":"Django CSRF protection issue","text":"Django 4.* has introduced a new check for the origin http header in CSRF verification. It now compares the values of the origin field in HTTP header and the host field in HTTP header. If they are different, an error is triggered.
If you deploy Seafile behind a proxy, use a non-standard port, or deploy Seafile in a cluster, the Origin and Host headers received by Django are likely to differ, because the Host header is often modified by the proxy. This mismatch results in a CSRF error.
You can add CSRF_TRUSTED_ORIGINS to seahub_settings.py to solve the problem:
CSRF_TRUSTED_ORIGINS = [\"https://<your-domain>\"]\n
"},{"location":"upgrade/upgrade_notes_for_11.0.x/#new-python-libraries","title":"New Python libraries","text":"Note, you should install Python libraries system wide using root user or sudo mode.
For Ubuntu 20.04/22.04
sudo apt-get update\nsudo apt-get install -y python3-dev ldap-utils libldap2-dev\n\nsudo pip3 install future==0.18.* mysqlclient==2.1.* pillow==10.2.* sqlalchemy==2.0.18 captcha==0.5.* django_simple_captcha==0.6.* djangosaml2==1.5.* pysaml2==7.2.* pycryptodome==3.16.* cffi==1.15.1 python-ldap==3.4.3\n
"},{"location":"upgrade/upgrade_notes_for_11.0.x/#upgrade-to-110x","title":"Upgrade to 11.0.x","text":""},{"location":"upgrade/upgrade_notes_for_11.0.x/#1-stop-seafile-100x-server","title":"1) Stop Seafile-10.0.x server.","text":""},{"location":"upgrade/upgrade_notes_for_11.0.x/#2-start-from-seafile-110x-run-the-script","title":"2) Start from Seafile 11.0.x, run the script:","text":"upgrade/upgrade_10.0_11.0.sh\n
"},{"location":"upgrade/upgrade_notes_for_11.0.x/#3modify-configurations-and-migrate-ldap-records","title":"3\uff09Modify configurations and migrate LDAP records","text":""},{"location":"upgrade/upgrade_notes_for_11.0.x/#change-configurations-for-ldap","title":"Change configurations for LDAP","text":"The configuration items of LDAP login and LDAP sync tasks are migrated from ccnet.conf to seahub_settings.py. The name of the configuration item is based on the 10.0 version, and the characters 'LDAP_' or 'MULTI_LDAP_1' are added. Examples are as follows:
# Basic configuration items for LDAP login\nENABLE_LDAP = True\nLDAP_SERVER_URL = 'ldap://192.168.0.125' # The URL of LDAP server\nLDAP_BASE_DN = 'ou=test,dc=seafile,dc=ren' # The root node of users who can \n # log in to Seafile in the LDAP server\nLDAP_ADMIN_DN = 'administrator@seafile.ren' # DN of the administrator used \n # to query the LDAP server for information\nLDAP_ADMIN_PASSWORD = 'Hello@123' # Password of LDAP_ADMIN_DN\nLDAP_PROVIDER = 'ldap' # Identify the source of the user, used in \n # the table social_auth_usersocialauth, defaults to 'ldap'\nLDAP_LOGIN_ATTR = 'userPrincipalName' # User's attribute used to log in to Seafile, \n # can be mail or userPrincipalName, cannot be changed\nLDAP_FILTER = 'memberOf=CN=testgroup,OU=test,DC=seafile,DC=ren' # Additional filter conditions,\n # users who meet the filter conditions can log in, otherwise they cannot log in\n# For update user info when login\nLDAP_CONTACT_EMAIL_ATTR = '' # For update user's contact_email\nLDAP_USER_ROLE_ATTR = '' # For update user's role\nLDAP_USER_FIRST_NAME_ATTR = 'givenName' # For update user's first name\nLDAP_USER_LAST_NAME_ATTR = 'sn' # For update user's last name\nLDAP_USER_NAME_REVERSE = False # Whether to reverse the user's first and last name\n
The following configuration items are only for Pro Edition:
# Configuration items for LDAP sync tasks.\nLDAP_SYNC_INTERVAL = 60 # LDAP sync task period, in minutes\n\n# LDAP user sync configuration items.\nENABLE_LDAP_USER_SYNC = True # Whether to enable user sync\nLDAP_USER_OBJECT_CLASS = 'person' # This is the name of the class used to search for user objects. \n # In Active Directory, it's usually \"person\". The default value is \"person\".\nLDAP_DEPT_ATTR = '' # LDAP user's department info\nLDAP_UID_ATTR = '' # LDAP user's login_id attribute\nLDAP_AUTO_REACTIVATE_USERS = True # Whether to auto activate deactivated user\nLDAP_USE_PAGED_RESULT = False # Whether to use pagination extension\nIMPORT_NEW_USER = True # Whether to import new users when sync user\nACTIVATE_USER_WHEN_IMPORT = True # Whether to activate the user when importing new user\nENABLE_EXTRA_USER_INFO_SYNC = True # Whether to enable sync of additional user information,\n # including user's full name, contact_email, department, and Windows login name, etc.\nDEACTIVE_USER_IF_NOTFOUND = False # Set to \"true\" if you want to deactivate a user \n # when he/she was deleted in AD server.\n\n# LDAP group sync configuration items.\nENABLE_LDAP_GROUP_SYNC = True # Whether to enable group sync\nLDAP_GROUP_FILTER = '' # Group sync filter\nLDAP_SYNC_DEPARTMENT_FROM_OU = True # Whether to enable sync departments from OU.\nLDAP_GROUP_OBJECT_CLASS = 'group' # This is the name of the class used to search for group objects.\nLDAP_GROUP_MEMBER_ATTR = 'member' # The attribute field to use when loading the group's members. 
\n # For most directory servers, the attributes is \"member\" \n # which is the default value.For \"posixGroup\", it should be set to \"memberUid\".\nLDAP_USER_ATTR_IN_MEMBERUID = 'uid' # The user attribute set in 'memberUid' option, \n # which is used in \"posixGroup\".The default value is \"uid\".\nLDAP_GROUP_UUID_ATTR = 'objectGUID' # Used to uniquely identify groups in LDAP\nLDAP_USE_GROUP_MEMBER_RANGE_QUERY = False # When a group contains too many members, \n # AD will only return part of them. Set this option to TRUE\n # to make LDAP sync work with large groups.\nLDAP_SYNC_GROUP_AS_DEPARTMENT = False # Whether to sync groups as top-level departments in Seafile\nLDAP_DEPT_NAME_ATTR = '' # Used to get the department name.\nLDAP_CREATE_DEPARTMENT_LIBRARY = False # If you decide to sync the group as a department,\n # you can set this option to \"true\". In this way, when \n # the group is synchronized for the first time, a library\n # is automatically created for the department, and the \n # library's name is the department's name.\nLDAP_DEPT_REPO_PERM = 'rw' # Set the permissions of the department repo, default permission is 'rw'.\nLDAP_DEFAULT_DEPARTMENT_QUOTA = -2 # You can set a default space quota for each department\n # when you synchronize a group for the first time. The \n # quota is set to unlimited if this option is not set.\n # Unit is MB.\nDEL_GROUP_IF_NOT_FOUND = False # Set to \"true\", sync process will delete the group if not found it in LDAP server.\nDEL_DEPARTMENT_IF_NOT_FOUND = False # Set to \"true\", sync process will deleted the department if not found it in LDAP server.\n
If you sync users from LDAP to Seafile and want Seafile to find the existing account (instead of creating a new one) when a user logs in via SSO (ADFS or OAuth), you can set SSO_LDAP_USE_SAME_UID = True
:
SSO_LDAP_USE_SAME_UID = True\n
Note, here the UID means the unique user ID, in LDAP it is the attribute you use for LDAP_LOGIN_ATTR
(not LDAP_UID_ATTR
), in ADFS it is uid
attribute. You need to make sure you use the same attribute for both settings.
Run the following script to migrate users in LDAPImported
to EmailUsers
cd <install-path>/seafile-server-latest\npython3 migrate_ldapusers.py\n
For Seafile docker
docker exec -it seafile /usr/bin/python3 /opt/seafile/seafile-server-latest/migrate_ldapusers.py\n
"},{"location":"upgrade/upgrade_notes_for_11.0.x/#change-configuration-for-oauth","title":"Change configuration for OAuth:","text":"In the new version, the OAuth login configuration should keep the email attribute unchanged to be compatible with new and old user logins. In version 11.0, a new uid attribute is added to be used as a user's external unique ID. The uid will be stored in social_auth_usersocialauth to map to internal virtual ID. For old users, the original email is used the internal virtual ID. The example is as follows:
# Version 10.0 or earlier\nOAUTH_ATTRIBUTE_MAP = {\n \"id\": (True, \"email\"),\n \"name\": (False, \"name\"),\n \"email\": (False, \"contact_email\"),\n}\n\n# Since 11.0 version, added 'uid' attribute.\nOAUTH_ATTRIBUTE_MAP = {\n \"id\": (True, \"email\"), # In the new version, the email attribute configuration should be kept unchanged to be compatible with old and new user logins\n \"uid\": (True, \"uid\"), # Seafile use 'uid' as the external unique identifier of the user. Different OAuth systems have different attributes, which may be: 'uid' or 'username', etc.\n \"name\": (False, \"name\"),\n \"email\": (False, \"contact_email\"),\n}\n
When a user logs in, Seafile will first use the \"id -> email\" mapping to find the old user and then create a \"uid -> uid\" mapping for this old user. After all users have logged in once, you can delete the configuration \"id\": (True, \"email\")
. You can also manually add records in social_auth_usersocialauth to map external uids to old users.
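As a minimal sketch of that manual approach, a record can be inserted directly. Every value below is a placeholder (the internal username, provider string, and external uid depend on your setup), and you should verify the column layout of your social_auth_usersocialauth table before running anything like this:

```shell
# Map an external SSO uid to an existing internal user by hand.
# Assumptions: MySQL backend, database seahub_db; all values are illustrative.
mysql -u seafile -p seahub_db -e \
  "INSERT INTO social_auth_usersocialauth (username, provider, uid, extra_data) \
   VALUES ('adc023e7232240fcbb83b273e1d73d36@auth.local', 'oauth', 'external-uid-123', '');"
```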
We have documented common issues encountered by users when upgrading to version 11.0 in our FAQ https://cloud.seatable.io/dtable/external-links/7b976c85f504491cbe8e/?tid=0000&vid=0000.
If you encounter any issue, please give it a check.
"},{"location":"upgrade/upgrade_notes_for_12.0.x/","title":"Upgrade notes for 12.0","text":"These notes give additional information about changes. Please always follow the main upgrade guide.
For docker based version, please check upgrade Seafile Docker image
Seafile version 12.0 has the following major changes:
Configuration changes:
.env
file is needed to contain some configuration items that need to be shared by different components in Seafile. We name it .env
to be consistent with the docker based installation..env
file.ccnet.conf
is removed. Some of its configuration items are moved to the .env
file, others are read from items in seafile.conf
with the same name.Other changes:
Breaking changes
Deploying Seafile with the binary package is now deprecated and will probably no longer be supported in version 13.0. We recommend migrating your existing Seafile deployment to a Docker based one.
"},{"location":"upgrade/upgrade_notes_for_12.0.x/#elasticsearch-change-pro-edition-only","title":"ElasticSearch change (pro edition only)","text":"Elasticsearch version is not changed in Seafile version 12.0
"},{"location":"upgrade/upgrade_notes_for_12.0.x/#new-python-libraries","title":"New Python libraries","text":"Note, you should install Python libraries system wide using root user or sudo mode.
For Ubuntu 22.04/24.04
sudo pip3 install future==1.0.* mysqlclient==2.2.* pillow==10.4.* sqlalchemy==2.0.* \\\ngevent==24.2.* captcha==0.6.* django_simple_captcha==0.6.* djangosaml2==1.9.* \\\npysaml2==7.3.* pycryptodome==3.20.* cffi==1.17.0 python-ldap==3.4.*\n
"},{"location":"upgrade/upgrade_notes_for_12.0.x/#upgrade-to-120-for-binary-installation","title":"Upgrade to 12.0 (for binary installation)","text":"The following instruction is for binary package based installation. If you use Docker based installation, please see Updgrade Docker
Note
If you have deployed the Notification Server, it should be re-deployed with the same version as the Seafile server.
For example:
You can modify .env
in your Notification Server host to re-deploy:
NOTIFICATION_SERVER_IMAGE=seafileltd/notification-server:12.0-latest\n
Restart Notification Server:
docker compose restart\n
Note: If you have a large number of Activity
in MySQL, clear this table first (see Clean Database). Otherwise, the database upgrade will take a long time.
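One hedged way to empty that table, assuming a MySQL backend and the default seahub_db database name — take a database backup first, since this irreversibly removes all activity history:

```shell
# Truncate the Activity table before the upgrade to speed up the schema migration.
# Assumption: seahub database is named seahub_db; back it up before running this.
mysql -u seafile -p seahub_db -e "TRUNCATE TABLE Activity;"
```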
upgrade/upgrade_11.0_12.0.sh\n
"},{"location":"upgrade/upgrade_notes_for_12.0.x/#3-create-the-env-file-in-conf-directory","title":"3) Create the .env
file in conf/ directory","text":"conf/.env
JWT_PRIVATE_KEY=xxx\nSEAFILE_SERVER_PROTOCOL=https\nSEAFILE_SERVER_HOSTNAME=seafile.example.com\nSEAFILE_MYSQL_DB_HOST=db # your MySQL host\nSEAFILE_MYSQL_DB_PORT=3306\nSEAFILE_MYSQL_DB_USER=seafile\nSEAFILE_MYSQL_DB_PASSWORD=<your MySQL password>\nSEAFILE_MYSQL_DB_CCNET_DB_NAME=ccnet_db\nSEAFILE_MYSQL_DB_SEAFILE_DB_NAME=seafile_db\nSEAFILE_MYSQL_DB_SEAHUB_DB_NAME=seahub_db\n
Note: JWT_PRIVATE_KEY is a random string with a length of no less than 32 characters. Generation example: pwgen -s 40 1
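If pwgen is not installed, an equivalent key can be generated with openssl (an assumption — any tool producing at least 32 random characters works):

```shell
# Generate a 40-character hex string suitable for JWT_PRIVATE_KEY.
JWT_PRIVATE_KEY=$(openssl rand -hex 20)
echo "JWT_PRIVATE_KEY=${JWT_PRIVATE_KEY}"
```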
Since Seafile 12.0, we use Docker to deploy the notification server. Please follow the notification server document.
Note
The Notification Server is designed to work with Docker based deployments. To make it work with the Seafile binary package on the same server, you will need to add the proper Nginx rules for the notification server.
"},{"location":"upgrade/upgrade_notes_for_12.0.x/#upgrade-seadoc-from-08-to-10","title":"Upgrade SeaDoc from 0.8 to 1.0","text":"If you have deployed SeaDoc v0.8 with Seafile v11.0, you can upgrade it to 1.0 use the following two steps:
SeaDoc and Seafile binary package
Deploying SeaDoc and Seafile binary package on the same server is no longer officially supported. You will need to add Nginx rules for SeaDoc server properly.
"},{"location":"upgrade/upgrade_notes_for_12.0.x/#delete-sdoc_db","title":"Delete sdoc_db","text":"From version 1.0, SeaDoc is using seahub_db database to store its operation logs and no longer need an extra database sdoc_db. The database tables in seahub_db are created automatically when you upgrade Seafile server from v11.0 to v12.0. You can simply delete sdoc_db.
"},{"location":"upgrade/upgrade_notes_for_12.0.x/#deploy-a-new-seadoc-server","title":"Deploy a new SeaDoc server","text":"Please see the document Setup SeaDoc to install SeaDoc on a separate machine and integrate with your binary packaged based Seafile server v12.0.
"},{"location":"upgrade/upgrade_notes_for_12.0.x/#faq","title":"FAQ","text":"We have documented common issues encountered by users when upgrading to version 12.0 in our FAQ https://cloud.seatable.io/dtable/external-links/7b976c85f504491cbe8e/?tid=0000&vid=0000.
If you encounter any issue, please give it a check.
"},{"location":"upgrade/upgrade_notes_for_7.1.x/","title":"Upgrade notes for 7.1.x","text":"These notes give additional information about changes. Please always follow the main upgrade guide.
"},{"location":"upgrade/upgrade_notes_for_7.1.x/#important-release-changes","title":"Important release changes","text":"From 7.1.0 version, Seafile will depend on the Python 3 and is\u00a0not\u00a0compatible\u00a0with\u00a0Python\u00a02.
Therefore you cannot upgrade directly from Seafile 6.x.x to 7.1.x.
If your current version of Seafile is not 7.0.x, you must first download the 7.0.x installation package and upgrade to 7.0.x before performing the subsequent operations.
To support both Python 3.6 and 3.7, we no longer bundle Python libraries with the Seafile package. You need to install most of the libraries on your own, as below.
"},{"location":"upgrade/upgrade_notes_for_7.1.x/#deploy-python3","title":"Deploy Python3","text":"Note, you should install Python libraries system wide using root user or sudo mode.
"},{"location":"upgrade/upgrade_notes_for_7.1.x/#seafile-ce","title":"Seafile-CE","text":"sudo apt-get install python3 python3-setuptools python3-pip memcached libmemcached-dev -y\n\nsudo pip3 install --timeout=3600 Pillow==9.4.0 pylibmc captcha jinja2 sqlalchemy==1.3.8 \\\n django-pylibmc django-simple-captcha python3-ldap\n
yum install python3 python3-setuptools python3-pip -y\n\nsudo pip3 install --timeout=3600 Pillow==9.4.0 pylibmc captcha jinja2 sqlalchemy==1.3.8 \\\n django-pylibmc django-simple-captcha python3-ldap\n
"},{"location":"upgrade/upgrade_notes_for_7.1.x/#seafile-pro","title":"Seafile-Pro","text":"apt-get install python3 python3-setuptools python3-pip -y\n\nsudo pip3 install --timeout=3600 Pillow==9.4.0 pylibmc captcha jinja2 sqlalchemy==1.3.8 \\\n django-pylibmc django-simple-captcha python3-ldap\n
yum install python3 python3-setuptools python3-pip -y\n\nsudo pip3 install --timeout=3600 Pillow==9.4.0 pylibmc captcha jinja2 sqlalchemy==1.3.8 \\\n django-pylibmc django-simple-captcha python3-ldap\n
"},{"location":"upgrade/upgrade_notes_for_7.1.x/#upgrade-to-71x","title":"Upgrade to 7.1.x","text":"upgrade/upgrade_7.0_7.1.sh\n
rm -rf /tmp/seahub_cache # Clear the Seahub cache files from disk.\n# If you are using the Memcached service, you need to restart the service to clear the Seahub cache.\nsystemctl restart memcached\n
Since Seafile 7.1.x, Seafdav no longer supports FastCGI, only WSGI.
This means that if you are using Seafdav and have deployed an Nginx or Apache reverse proxy, you need to change FastCGI to WSGI.
"},{"location":"upgrade/upgrade_notes_for_7.1.x/#for-nginx","title":"For Nginx","text":"For Seafdav, the configuration of Nginx is as follows:
.....\n location /seafdav {\n proxy_pass http://127.0.0.1:8080/seafdav;\n proxy_set_header Host $host;\n proxy_set_header X-Real-IP $remote_addr;\n proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;\n proxy_set_header X-Forwarded-Host $server_name;\n proxy_set_header X-Forwarded-Proto $scheme;\n proxy_read_timeout 1200s;\n client_max_body_size 0;\n\n access_log /var/log/nginx/seafdav.access.log seafileformat;\n error_log /var/log/nginx/seafdav.error.log;\n }\n
"},{"location":"upgrade/upgrade_notes_for_7.1.x/#for-apache","title":"For Apache","text":"For Seafdav, the configuration of Apache is as follows:
......\n <Location /seafdav>\n ProxyPass \"http://127.0.0.1:8080/seafdav\"\n </Location>\n
"},{"location":"upgrade/upgrade_notes_for_7.1.x/#builtin-office-file-preview","title":"Builtin office file preview","text":"The implementation of builtin office file preview has been changed. You should update your configuration according to:
https://download.seafile.com/published/seafile-manual/deploy_pro/office_documents_preview.md#user-content-Version%207.1+
"},{"location":"upgrade/upgrade_notes_for_7.1.x/#if-you-are-using-ceph-backend","title":"If you are using Ceph backend","text":"If you are using Ceph storage backend, you need to install new python library.
On Debian/Ubuntu (Seafile 7.1+):
sudo apt-get install python3-rados\n
"},{"location":"upgrade/upgrade_notes_for_7.1.x/#login-page-customization","title":"Login Page Customization","text":"If you have customized the login page or other html pages, as we have removed some old javascript libraries, your customized pages may not work anymore. Please try to re-customize based on the newest version.
"},{"location":"upgrade/upgrade_notes_for_7.1.x/#user-name-encoding-issue-with-shibboleth-login","title":"User name encoding issue with Shibboleth login","text":"Note, the following patch is included in version pro-7.1.8 and ce-7.1.5 already.
Two customers have reported that after upgrading to version 7.1, users who log in via Shibboleth single sign-on get a wrong name if the name contains a special character. We suspect it is a Shibboleth problem, as it does not send the name in UTF-8 encoding to Seafile. (https://issues.shibboleth.net/jira/browse/SSPCPP-2)
The solution is to modify the code in seahub/thirdpart/shibboleth/middleware.py:
158 if nickname.strip(): # set nickname when it's not empty\n159 p.nickname = nickname\n\nto \n\n158 if nickname.strip(): # set nickname when it's not empty\n159 p.nickname = nickname.encode(\"iso-8859-1\").decode('utf8')\n
If you have this problem too, please let us know.
"},{"location":"upgrade/upgrade_notes_for_7.1.x/#faq","title":"FAQ","text":""},{"location":"upgrade/upgrade_notes_for_7.1.x/#sql-error-during-upgrade","title":"SQL Error during upgrade","text":"The upgrade script will try to create a missing table and remove an used index. The following SQL errors are jus warnings and can be ignored:
[INFO] updating seahub database...\n/opt/seafile/seafile-server-7.1.1/seahub/thirdpart/pymysql/cursors.py:170: Warning: (1050, \"Table 'base_reposecretkey' already exists\")\n result = self._query(query)\n[WARNING] Failed to execute sql: (1091, \"Can't DROP 'drafts_draft_origin_file_uuid_7c003c98_uniq'; check that column/key exists\")\n
"},{"location":"upgrade/upgrade_notes_for_7.1.x/#internal-server-error-after-upgrade-to-version-71","title":"Internal server error after upgrade to version 7.1","text":"Please check whether the seahub process is running in your server. If it is running, there should be an error log in seahub.log for internal server error.
If the seahub process is not running, you can modify conf/gunicorn.conf, change daemon = True
to daemon = False
, then run ./seahub.sh again. If there are missing Python dependencies, the error will be reported in the terminal.
The most common issue is that you use an old memcache configuration that depends on python-memcache. The new way is
'BACKEND': 'django_pylibmc.memcached.PyLibMCCache'\n
The old way is
'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache',\n
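Put together, a full cache block in seahub_settings.py using the new backend might look like the following sketch (the memcached address is an assumption; adjust it to your deployment):

```python
# seahub_settings.py -- cache configured via django-pylibmc.
# '127.0.0.1:11211' is an assumed local memcached address.
CACHES = {
    'default': {
        'BACKEND': 'django_pylibmc.memcached.PyLibMCCache',
        'LOCATION': '127.0.0.1:11211',
    },
}
```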
"},{"location":"upgrade/upgrade_notes_for_8.0.x/","title":"Upgrade notes for 8.0","text":"These notes give additional information about changes. Please always follow the main upgrade guide.
"},{"location":"upgrade/upgrade_notes_for_8.0.x/#important-release-changes","title":"Important release changes","text":"From 8.0, the ccnet-server component is removed, but ccnet.conf is still needed.
"},{"location":"upgrade/upgrade_notes_for_8.0.x/#install-new-python-libraries","title":"Install new Python libraries","text":"Note: you should install Python libraries system-wide as the root user or using sudo.
apt-get install libmysqlclient-dev\n\nsudo pip3 install -U future mysqlclient sqlalchemy==1.4.3\n
apt-get install default-libmysqlclient-dev \n\nsudo pip3 install future mysqlclient sqlalchemy==1.4.3\n
yum install python3-devel mysql-devel gcc gcc-c++ -y\n\nsudo pip3 install future\nsudo pip3 install mysqlclient==2.0.1 sqlalchemy==1.4.3\n
yum install python3-devel mysql-devel gcc gcc-c++ -y\n\nsudo pip3 install future mysqlclient sqlalchemy==1.4.3\n
"},{"location":"upgrade/upgrade_notes_for_8.0.x/#change-shibboleth-setting","title":"Change Shibboleth Setting","text":"If you are using Shibboleth and have configured EXTRA_MIDDLEWARE_CLASSES
EXTRA_MIDDLEWARE_CLASSES = (\n 'shibboleth.middleware.ShibbolethRemoteUserMiddleware',\n)\n
please change it to EXTRA_MIDDLEWARE
EXTRA_MIDDLEWARE = (\n 'shibboleth.middleware.ShibbolethRemoteUserMiddleware',\n)\n
This is because support for old-style middleware using settings.MIDDLEWARE_CLASSES was removed in Django 2.0.
Starting from Seafile 7.1.x, run the script:
upgrade/upgrade_7.1_8.0.sh\n
Start Seafile-8.0.x server.
These notes give additional information about changes. Please always follow the main upgrade guide.
"},{"location":"upgrade/upgrade_notes_for_9.0.x/#important-release-changes","title":"Important release changes","text":"Version 9.0 includes the following major changes:
The new file-server written in golang serves HTTP requests to upload/download/sync files. It provides three advantages:
You can turn the golang file-server on by adding the following configuration to seafile.conf
[fileserver]\nuse_go_fileserver = true\n
"},{"location":"upgrade/upgrade_notes_for_9.0.x/#new-python-libraries","title":"New Python libraries","text":"Note, you should install Python libraries system wide using root user or sudo mode.
sudo pip3 install pycryptodome==3.12.0 cffi==1.14.0\n
"},{"location":"upgrade/upgrade_notes_for_9.0.x/#upgrade-to-90x","title":"Upgrade to 9.0.x","text":"Starting from Seafile 8.0.x, run the script:
upgrade/upgrade_8.0_9.0.sh\n
Start Seafile-9.0.x server.
If your elasticsearch data is not large, it is recommended to deploy the latest 7.x version of ElasticSearch and then rebuild the new index. The specific steps are as follows:
Download ElasticSearch image
docker pull elasticsearch:7.16.2\n
Create a new folder to store ES data and give the folder permissions
mkdir -p /opt/seafile-elasticsearch/data && chmod -R 777 /opt/seafile-elasticsearch/data/\n
Note: you must properly grant permission to access the ES data directory and run the Elasticsearch container as the root user; refer to here.
Start ES docker image
sudo docker run -d --name es -p 9200:9200 -e \"discovery.type=single-node\" -e \"bootstrap.memory_lock=true\" -e \"ES_JAVA_OPTS=-Xms2g -Xmx2g\" -e \"xpack.security.enabled=false\" --restart=always -v /opt/seafile-elasticsearch/data:/usr/share/elasticsearch/data -d elasticsearch:7.16.2\n
Delete old index data
rm -rf /opt/seafile/pro-data/search/data/*\n
Modify seafevents.conf
[INDEX FILES]\nexternal_es_server = true\nes_host = your server's IP (use 127.0.0.1 if deployed locally)\nes_port = 9200\n
Restart seafile
su seafile\ncd seafile-server-latest/\n./seafile.sh stop && ./seahub.sh stop\n./seafile.sh start && ./seahub.sh start\n
"},{"location":"upgrade/upgrade_notes_for_9.0.x/#method-two-reindex-the-existing-data","title":"Method two, reindex the existing data","text":"If your data volume is relatively large, it would take a long time to rebuild indexes for all Seafile libraries, so you can reindex the existing data instead. This requires the following steps:
The detailed process is as follows
Download ElasticSearch image:
docker pull elasticsearch:7.16.2\n
Note: for Seafile version 9.0, you need to manually create the elasticsearch data path on the host machine and give it 777 permission; otherwise elasticsearch will report path permission problems when starting. The command is as follows:
mkdir -p /opt/seafile-elasticsearch/data \n
Move original data to the new folder and give the folder permissions
mv /opt/seafile/pro-data/search/data/* /opt/seafile-elasticsearch/data/\nchmod -R 777 /opt/seafile-elasticsearch/data/\n
Note: you must properly grant permission to access the ES data directory and run the Elasticsearch container as the root user; refer to here.
Start ES docker image
sudo docker run -d --name es -p 9200:9200 -e \"discovery.type=single-node\" -e \"bootstrap.memory_lock=true\" -e \"ES_JAVA_OPTS=-Xms1g -Xmx1g\" -e \"xpack.security.enabled=false\" --restart=always -v /opt/seafile-elasticsearch/data:/usr/share/elasticsearch/data -d elasticsearch:7.16.2\n
Note: ES_JAVA_OPTS can be adjusted according to your needs.
Create an index with 7.x compatible mappings.
curl -X PUT -H 'Content-Type: application/json' 'http{s}://{es server IP}:9200/new_repo_head?include_type_name=false&pretty=true' -d '\n{\n \"mappings\" : {\n \"properties\" : {\n \"commit\" : {\n \"type\" : \"text\",\n \"index\" : false\n },\n \"repo\" : {\n \"type\" : \"text\",\n \"index\" : false\n },\n \"updatingto\" : {\n \"type\" : \"text\",\n \"index\" : false\n }\n }\n }\n}'\n\ncurl -X PUT -H 'Content-Type: application/json' 'http{s}://{es server IP}:9200/new_repofiles/?include_type_name=false&pretty=true' -d '\n{\n \"settings\" : {\n \"index\" : {\n \"number_of_shards\" : 5,\n \"number_of_replicas\" : 1,\n \"analysis\" : {\n \"analyzer\" : {\n \"seafile_file_name_ngram_analyzer\" : {\n \"filter\" : [\n \"lowercase\"\n ],\n \"type\" : \"custom\",\n \"tokenizer\" : \"seafile_file_name_ngram_tokenizer\"\n }\n },\n \"tokenizer\" : {\n \"seafile_file_name_ngram_tokenizer\" : {\n \"type\" : \"ngram\",\n \"min_gram\" : \"3\",\n \"max_gram\" : \"4\"\n }\n }\n }\n }\n },\n \"mappings\" : {\n \"properties\" : {\n \"content\" : {\n \"type\" : \"text\",\n \"term_vector\" : \"with_positions_offsets\"\n },\n \"filename\" : {\n \"type\" : \"text\",\n \"fields\" : {\n \"ngram\" : {\n \"type\" : \"text\",\n \"analyzer\" : \"seafile_file_name_ngram_analyzer\"\n }\n }\n },\n \"is_dir\" : {\n \"type\" : \"boolean\"\n },\n \"mtime\" : {\n \"type\" : \"date\"\n },\n \"path\" : {\n \"type\" : \"keyword\"\n },\n \"repo\" : {\n \"type\" : \"keyword\"\n },\n \"size\" : {\n \"type\" : \"long\"\n },\n \"suffix\" : {\n \"type\" : \"keyword\"\n }\n }\n }\n}'\n
Set the refresh_interval
to -1
and the number_of_replicas
to 0
for efficient reindexing:
curl -X PUT -H 'Content-Type: application/json' 'http{s}://{es server IP}:9200/new_repo_head/_settings?pretty' -d '\n{\n \"index\" : {\n \"refresh_interval\" : \"-1\",\n \"number_of_replicas\" : 0\n }\n}'\n\ncurl -X PUT -H 'Content-Type: application/json' 'http{s}://{es server IP}:9200/new_repofiles/_settings?pretty' -d '\n{\n \"index\" : {\n \"refresh_interval\" : \"-1\",\n \"number_of_replicas\" : 0\n }\n}'\n
Use the reindex API to copy documents from the old index into the new index.
curl -X POST -H 'Content-Type: application/json' 'http{s}://{es server IP}:9200/_reindex/?pretty' -d '\n{\n \"source\": {\n \"index\": \"repo_head\",\n \"type\": \"repo_commit\"\n },\n \"dest\": {\n \"index\": \"new_repo_head\",\n \"type\": \"_doc\"\n }\n}'\n\ncurl -X POST -H 'Content-Type: application/json' 'http{s}://{es server IP}:9200/_reindex/?pretty' -d '\n{\n \"source\": {\n \"index\": \"repofiles\",\n \"type\": \"file\"\n },\n \"dest\": {\n \"index\": \"new_repofiles\",\n \"type\": \"_doc\"\n }\n}'\n
Reset the refresh_interval
and number_of_replicas
to the values used in the old index.
curl -X PUT -H 'Content-Type: application/json' 'http{s}://{es server IP}:9200/new_repo_head/_settings?pretty' -d '\n{\n \"index\" : {\n \"refresh_interval\" : null,\n \"number_of_replicas\" : 1\n }\n}'\n\ncurl -X PUT -H 'Content-Type: application/json' 'http{s}://{es server IP}:9200/new_repofiles/_settings?pretty' -d '\n{\n \"index\" : {\n \"refresh_interval\" : null,\n \"number_of_replicas\" : 1\n }\n}'\n
Wait for the index status to change to green
.
curl http{s}://{es server IP}:9200/_cluster/health?pretty\n
Use the aliases API to delete the old index and add an alias with the old index name to the new index.
curl -X POST -H 'Content-Type: application/json' 'http{s}://{es server IP}:9200/_aliases?pretty' -d '\n{\n \"actions\": [\n {\"remove_index\": {\"index\": \"repo_head\"}},\n {\"remove_index\": {\"index\": \"repofiles\"}},\n {\"add\": {\"index\": \"new_repo_head\", \"alias\": \"repo_head\"}},\n {\"add\": {\"index\": \"new_repofiles\", \"alias\": \"repofiles\"}}\n ]\n}'\n
After reindex, modify the configuration in Seafile.
Modify seafevents.conf
[INDEX FILES]\nexternal_es_server = true\nes_host = your server's IP\nes_port = 9200\n
Restart seafile
su seafile\ncd seafile-server-latest/\n./seafile.sh stop && ./seahub.sh stop\n./seafile.sh start && ./seahub.sh start\n
"},{"location":"upgrade/upgrade_notes_for_9.0.x/#method-three-if-you-are-in-a-cluster-environment","title":"Method three, if you are in a cluster environment","text":"Deploy a new ElasticSearch 7.x service, use Seafile 9.0 to deploy a new backend node, and connect it to ElasticSearch 7.x. The backend node does not start the Seafile background service; just manually run the command ./pro/pro.py search --update
. Then upgrade the other nodes to Seafile 9.0 and switch them to the new ElasticSearch 7.x after the index is created. Finally, deactivate the old backend node and the old version of ElasticSearch.
Seafile is an open source cloud storage system for file sync, share and document collaboration. SeaDoc is an extension of Seafile that provides a lightweight online collaborative document feature.
"},{"location":"#license","title":"LICENSE","text":"The different components of Seafile project are released under different licenses:
As the system admin, you can enter the admin panel by clicking System Admin in the popup of your avatar.
Backup and recovery:
Recover corrupt files after server hard shutdown or system crash:
You can run Seafile GC to remove unused files:
When you set up the seahub website, you should have set up an admin account. After you log in as an admin, you may add/delete users and file libraries.
"},{"location":"administration/account/#how-to-change-a-users-id","title":"How to change a user's ID","text":"Since version 11.0, if you need to change a user's external ID, you can manually modify database table social_auth_usersocialauth
to map the new external ID to internal ID.
Administrators can reset the password for a user in the "System Admin" page.
In a private server, the default settings don't allow users to reset their password by email. If you want to enable this, you first have to set up the notification email.
"},{"location":"administration/account/#forgot-admin-account-or-password","title":"Forgot Admin Account or Password?","text":"You may run reset-admin.sh
script under the seafile-server-latest directory. This script helps you reset the admin account and password. Your data will not be deleted; it only unlocks the admin account and changes its password.
Tip
Enter into the docker image, then go to /opt/seafile/seafile-server-latest
Under the seafile-server-latest directory, run ./seahub.sh python-env python seahub/manage.py check_user_quota. When a user's quota exceeds 90%, an email will be sent. If you want to enable this, you first have to set up the notification email.
In the Pro Edition, Seafile offers four audit logs in system admin panel:
The logging feature is turned off by default before version 6.0. Add the following option to seafevents.conf
to turn it on:
[Audit]\n## Audit log is disabled default.\n## Leads to additional SQL tables being filled up, make sure your SQL server is able to handle it.\nenabled = true\n
The audit log data is being saved in seahub_db
.
There are generally two parts of data to back up:
There are 3 databases:
The backup is a two step procedure:
The second sequence is better in the sense that it avoids library corruption. Like other backup solutions, some new data can be lost in recovery. There is always a backup window. However, if your storage backup mechanism can finish quickly enough, using the first sequence can retain more data.
We assume your seafile data directory is in /opt/seafile
for binary package based deployment (or /opt/seafile-data
for docker based deployment). And you want to backup to /backup
directory. The /backup
can be an NFS or Windows share mount exported by another machine, or just an external disk. You can create a layout similar to the following in /backup
directory:
/backup\n---- databases/ contains database backup files\n---- data/ contains backups of the data directory\n
"},{"location":"administration/backup_recovery/#backup-and-restore-for-binary-package-based-deployment","title":"Backup and restore for binary package based deployment","text":""},{"location":"administration/backup_recovery/#backing-up-databases","title":"Backing up Databases","text":"It's recommended to backup the database to a separate file each time. Don't overwrite older database backups for at least a week.
MySQL
Assume your database names are ccnet_db
, seafile_db
and seahub_db
. mysqldump automatically locks the tables, so you don't need to stop the Seafile server when backing up MySQL databases. Since the database tables are usually very small, it won't take long to dump.
mysqldump -h [mysqlhost] -u[username] -p[password] --opt ccnet_db > /backup/databases/ccnet_db.sql.`date +\"%Y-%m-%d-%H-%M-%S\"`\n\nmysqldump -h [mysqlhost] -u[username] -p[password] --opt seafile_db > /backup/databases/seafile_db.sql.`date +\"%Y-%m-%d-%H-%M-%S\"`\n\nmysqldump -h [mysqlhost] -u[username] -p[password] --opt seahub_db > /backup/databases/seahub_db.sql.`date +\"%Y-%m-%d-%H-%M-%S\"`\n
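A sketch of wrapping these dumps in a nightly job that also prunes backups older than a week. The host, credentials and /backup/databases path are placeholders from the examples above, and the echo makes this a dry run; remove it to actually execute the commands.

```shell
#!/bin/sh
# Nightly dump sketch with date-stamped file names.
STAMP=$(date +"%Y-%m-%d-%H-%M-%S")
for DB in ccnet_db seafile_db seahub_db; do
    # "echo" = dry run; drop it to really dump the database.
    echo "mysqldump -h mysqlhost -u username -p --opt $DB > /backup/databases/$DB.sql.$STAMP"
done
# Keep at least a week of history: prune dumps older than 7 days.
echo "find /backup/databases -name '*.sql.*' -mtime +7 -delete"
```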
"},{"location":"administration/backup_recovery/#backing-up-seafile-library-data","title":"Backing up Seafile library data","text":"The data files are all stored in the /opt/seafile
directory, so just back up the whole directory. You can directly copy the whole directory to the backup destination, or you can use rsync to do incremental backup.
To directly copy the whole data directory,
cp -R /opt/seafile /backup/data/seafile-`date +\"%Y-%m-%d-%H-%M-%S\"`\n
This produces a separate copy of the data directory each time. You can delete older backup copies after a new one is completed.
If you have a lot of data, copying the whole data directory would take long. You can use rsync to do incremental backup.
rsync -az /opt/seafile /backup/data\n
This command backs up the data directory to
.
Now suppose your primary Seafile server is broken and you're switching to a new machine. Use the backup data to restore your Seafile instance:
Copy /backup/data/seafile to the new machine. Let's assume the Seafile deployment location on the new machine is also /opt/seafile.
Now with the latest valid database backup files at hand, you can restore them.
MySQL
mysql -u[username] -p[password] ccnet_db < ccnet_db.sql.2013-10-19-16-00-05\nmysql -u[username] -p[password] seafile_db < seafile_db.sql.2013-10-19-16-00-20\nmysql -u[username] -p[password] seahub_db < seahub_db.sql.2013-10-19-16-01-05\n
"},{"location":"administration/backup_recovery/#backup-and-restore-for-docker-based-deployment","title":"Backup and restore for Docker based deployment","text":""},{"location":"administration/backup_recovery/#structure","title":"Structure","text":"We assume your seafile volumes path is in /opt/seafile-data
. And you want to backup to /backup
directory.
The data files to be backed up:
/opt/seafile-data/seafile/conf # configuration files\n/opt/seafile-data/seafile/seafile-data # data of seafile\n/opt/seafile-data/seafile/seahub-data # data of seahub\n
"},{"location":"administration/backup_recovery/#backing-up-database","title":"Backing up Database","text":"# It's recommended to backup the database to a separate file each time. Don't overwrite older database backups for at least a week.\ncd /backup/databases\ndocker exec -it seafile-mysql mysqldump -u[username] -p[password] --opt ccnet_db > ccnet_db.sql\ndocker exec -it seafile-mysql mysqldump -u[username] -p[password] --opt seafile_db > seafile_db.sql\ndocker exec -it seafile-mysql mysqldump -u[username] -p[password] --opt seahub_db > seahub_db.sql\n
"},{"location":"administration/backup_recovery/#backing-up-seafile-library-data_1","title":"Backing up Seafile library data","text":""},{"location":"administration/backup_recovery/#to-directly-copy-the-whole-data-directory","title":"To directly copy the whole data directory","text":"cp -R /opt/seafile-data/seafile /backup/data/\n
"},{"location":"administration/backup_recovery/#use-rsync-to-do-incremental-backup","title":"Use rsync to do incremental backup","text":"rsync -az /opt/seafile-data/seafile /backup/data/\n
"},{"location":"administration/backup_recovery/#recovery","title":"Recovery","text":""},{"location":"administration/backup_recovery/#restore-the-databases_1","title":"Restore the databases","text":"docker cp /backup/databases/ccnet_db.sql seafile-mysql:/tmp/ccnet_db.sql\ndocker cp /backup/databases/seafile_db.sql seafile-mysql:/tmp/seafile_db.sql\ndocker cp /backup/databases/seahub_db.sql seafile-mysql:/tmp/seahub_db.sql\n\ndocker exec -it seafile-mysql /bin/sh -c \"mysql -u[username] -p[password] ccnet_db < /tmp/ccnet_db.sql\"\ndocker exec -it seafile-mysql /bin/sh -c \"mysql -u[username] -p[password] seafile_db < /tmp/seafile_db.sql\"\ndocker exec -it seafile-mysql /bin/sh -c \"mysql -u[username] -p[password] seahub_db < /tmp/seahub_db.sql\"\n
"},{"location":"administration/backup_recovery/#restore-the-seafile-data","title":"Restore the seafile data","text":"cp -R /backup/data/* /opt/seafile-data/seafile/\n
"},{"location":"administration/clean_database/","title":"Clean Database","text":""},{"location":"administration/clean_database/#session","title":"Session","text":"Use the following command to clear expired session records in Seahub database:
cd seafile-server-latest\n./seahub.sh python-env python3 seahub/manage.py clearsessions\n
Tip
Enter into the docker image, then go to /opt/seafile/seafile-server-latest
Use the following command to simultaneously clean up records older than 90 days in the tables Activity, sysadmin_extra_userloginlog, FileAudit, FileUpdate, FileHistory, PermAudit and FileTrash:
./seahub.sh python-env python3 seahub/manage.py clean_db_records\n
You can also clean these tables manually as follows.
"},{"location":"administration/clean_database/#activity","title":"Activity","text":"Use the following command to clear the activity records:
use seahub_db;\nDELETE FROM Activity WHERE to_days(now()) - to_days(timestamp) > 90;\nDELETE FROM UserActivity WHERE to_days(now()) - to_days(timestamp) > 90;\n
"},{"location":"administration/clean_database/#login","title":"Login","text":"Use the following command to clean the login records:
use seahub_db;\nDELETE FROM sysadmin_extra_userloginlog WHERE to_days(now()) - to_days(login_date) > 90;\n
"},{"location":"administration/clean_database/#file-access","title":"File Access","text":"Use the following command to clean the file access records:
use seahub_db;\nDELETE FROM FileAudit WHERE to_days(now()) - to_days(timestamp) > 90;\n
"},{"location":"administration/clean_database/#file-update","title":"File Update","text":"Use the following command to clean the file update records:
use seahub_db;\nDELETE FROM FileUpdate WHERE to_days(now()) - to_days(timestamp) > 90;\n
"},{"location":"administration/clean_database/#permisson","title":"Permisson","text":"Use the following command to clean the permission change audit records:
use seahub_db;\nDELETE FROM PermAudit WHERE to_days(now()) - to_days(timestamp) > 90;\n
"},{"location":"administration/clean_database/#file-history","title":"File History","text":"Use the following command to clean the file history records:
use seahub_db;\nDELETE FROM FileHistory WHERE to_days(now()) - to_days(timestamp) > 90;\n
"},{"location":"administration/clean_database/#clean-outdated-library-data","title":"Clean outdated library data","text":"Since version 6.2, we offer a command to clear outdated library records in the Seafile database, e.g. records that are not deleted after a library is deleted. This is because users can restore a deleted library, so we can't delete these records at library deletion time.
./seahub.sh python-env python3 seahub/manage.py clear_invalid_repo_data\n
This command has been improved in version 10.0, including:
It clears the invalid data in small batches, avoiding consuming too much database resource in a short time.
Dry-run mode: if you just want to see how much invalid data can be deleted without actually deleting any data, you can use the dry-run option, e.g.
./seahub.sh python-env python3 seahub/manage.py clear_invalid_repo_data --dry-run=true\n
"},{"location":"administration/clean_database/#clean-library-sync-tokens","title":"Clean library sync tokens","text":"There are two tables in Seafile db that are related to library sync tokens.
When you have many sync clients connected to the server, these two tables can have a large number of rows. Many of them are no longer actively used. You may clean the tokens that have not been used in a recent period with the following SQL query:
delete t,i from RepoUserToken t, RepoTokenPeerInfo i where t.token=i.token and sync_time < xxxx;\n
xxxx is the UNIX timestamp for the time before which tokens will be deleted.
To be safe, you can first check how many tokens will be removed:
select * from RepoUserToken t, RepoTokenPeerInfo i where t.token=i.token and sync_time < xxxx;\n
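To get a concrete value for xxxx, you can compute the cutoff with GNU date; for example, a 90-day window (the window length here is an arbitrary choice):

```shell
# UNIX timestamp for "90 days ago" (GNU date); substitute the printed
# value for xxxx in the SQL statements above.
CUTOFF=$(date -d '90 days ago' +%s)
echo "$CUTOFF"
```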
"},{"location":"administration/export_report/","title":"Export Report","text":"Since version 7.0.8 pro, Seafile provides commands to export reports via command line.
Tip
Enter into the docker image, then go to /opt/seafile/seafile-server-latest
cd seafile-server-latest\n./seahub.sh python-env python seahub/manage.py export_user_traffic_report --date 201906\n
"},{"location":"administration/export_report/#export-user-storage-report","title":"Export User Storage Report","text":"cd seafile-server-latest\n./seahub.sh python-env python seahub/manage.py export_user_storage_report\n
"},{"location":"administration/export_report/#export-file-access-log","title":"Export File Access Log","text":"cd seafile-server-latest\n./seahub.sh python-env python seahub/manage.py export_file_access_log --start-date 2019-06-01 --end-date 2019-07-01\n
"},{"location":"administration/logs/","title":"Logs","text":""},{"location":"administration/logs/#log-files-of-seafile-server","title":"Log files of seafile server","text":"On the server side, Seafile stores the files in the libraries in an internal format. Seafile has its own representation of directories and files (similar to Git).
With default installation, these internal objects are stored in the server's file system directly (such as Ext4, NTFS). But most file systems don't assure the integrity of file contents after a hard shutdown or system crash. So if new Seafile internal objects are being written when the system crashes, they can be corrupt after the system reboots. This will make part of the corresponding library not accessible.
Warning
If you store the seafile-data directory in a battery-backed NAS (like EMC or NetApp), or use S3 backend available in the Pro edition, the internal objects won't be corrupt.
We provide a seaf-fsck.sh script to check the integrity of libraries. The seaf-fsck tool accepts the following arguments:
cd seafile-server-latest\n./seaf-fsck.sh [--repair|-r] [--export|-E export_path] [repo_id_1 [repo_id_2 ...]]\n
Tip
Enter into the docker image, then go to /opt/seafile/seafile-server-latest
There are three modes of operation for seaf-fsck:
Running seaf-fsck.sh without any arguments will run a read-only integrity check for all libraries.
./seaf-fsck.sh\n
If you want to check integrity for specific libraries, just append the library id's as arguments:
./seaf-fsck.sh [library-id1] [library-id2] ...\n
The output looks like:
[02/13/15 16:21:07] fsck.c(470): Running fsck for repo ca1a860d-e1c1-4a52-8123-0bf9def8697f.\n[02/13/15 16:21:07] fsck.c(413): Checking file system integrity of repo fsck(ca1a860d)...\n[02/13/15 16:21:07] fsck.c(35): Dir 9c09d937397b51e1283d68ee7590cd9ce01fe4c9 is missing.\n[02/13/15 16:21:07] fsck.c(200): Dir /bf/pk/(9c09d937) is corrupted.\n[02/13/15 16:21:07] fsck.c(105): Block 36e3dd8757edeb97758b3b4d8530a4a8a045d3cb is corrupted.\n[02/13/15 16:21:07] fsck.c(178): File /bf/02.1.md(ef37e350) is corrupted.\n[02/13/15 16:21:07] fsck.c(85): Block 650fb22495b0b199cff0f1e1ebf036e548fcb95a is missing.\n[02/13/15 16:21:07] fsck.c(178): File /01.2.md(4a73621f) is corrupted.\n[02/13/15 16:21:07] fsck.c(514): Fsck finished for repo ca1a860d.\n
The corrupted files and directories are reported.
Sometimes you can see output like the following:
[02/13/15 16:36:11] Commit 6259251e2b0dd9a8e99925ae6199cbf4c134ec10 is missing\n[02/13/15 16:36:11] fsck.c(476): Repo ca1a860d HEAD commit is corrupted, need to restore to an old version.\n[02/13/15 16:36:11] fsck.c(314): Scanning available commits...\n[02/13/15 16:36:11] fsck.c(376): Find available commit 1b26b13c(created at 2015-02-13 16:10:21) for repo ca1a860d.\n
This means the \"head commit\" (current state of the library) recorded in database is not consistent with the library data. In such case, fsck will try to find the last consistent state and check the integrity in that state.
Tip
If you have many libraries, it's helpful to save the fsck output into a log file for later analysis.
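For example, you can grep the saved log for the corruption keywords. The sketch below runs against a small sample log built from lines of the output shown above; in practice you would redirect the ./seaf-fsck.sh output into the file, and "fsck-check.log" is an illustrative name.

```shell
# Build a sample log from the fsck output shown above, then list the
# lines that report problems.
cat > fsck-check.log <<'EOF'
[02/13/15 16:21:07] fsck.c(200): Dir /bf/pk/(9c09d937) is corrupted.
[02/13/15 16:21:07] fsck.c(85): Block 650fb22495b0b199cff0f1e1ebf036e548fcb95a is missing.
[02/13/15 16:21:07] fsck.c(514): Fsck finished for repo ca1a860d.
EOF
# Keywords seen in fsck reports; extend the pattern as needed.
grep -E 'Fail|corrupted|missing' fsck-check.log
```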
"},{"location":"administration/seafile_fsck/#repairing-corruption","title":"Repairing Corruption","text":"Corruption repair in seaf-fsck basically works in two steps:
Running the following command repairs all the libraries:
./seaf-fsck.sh --repair\n
Most of time you run the read-only integrity check first, to find out which libraries are corrupted. And then you repair specific libraries with the following command:
./seaf-fsck.sh --repair [library-id1] [library-id2] ...\n
After repairing, seaf-fsck includes the list of corrupted files and folders in the library history, so it's much easier to locate corrupted paths.
"},{"location":"administration/seafile_fsck/#best-practice-for-repairing-a-library","title":"Best Practice for Repairing a Library","text":"To check all libraries and find out which library is corrupted, the system admin can run seaf-fsck.sh without any argument and save the output to a log file. Search for keyword \"Fail\" in the log file to locate corrupted libraries. You can run seaf-fsck to check all libraries when your Seafile server is running. It won't damage or change any files.
When the system admin finds that a library is corrupted, he/she should run seaf-fsck.sh with "--repair" for the library. After the command fixes the library, the admin should inform users to recover files from other places. There are two ways:
Starting from Pro edition 7.1.5, an option is added to speed up fsck. Most of the running time of seaf-fsck is spent on calculating hashes for file contents. This hash is compared with the block object ID; if they're not consistent, the block is detected as corrupted.
In many cases the file contents aren't corrupted; some objects are just missing from the system. So it's enough to only check for object existence, which greatly speeds up the fsck process.
To skip checking file contents, add the \"--shallow\" or \"-s\" option to seaf-fsck.
"},{"location":"administration/seafile_fsck/#exporting-libraries-to-file-system","title":"Exporting Libraries to File System","text":"You can use seaf-fsck to export all the files in libraries to external file system (such as Ext4). This procedure doesn't rely on the seafile database. As long as you have your seafile-data directory, you can always export your files from Seafile to external file system.
The command syntax is
./seaf-fsck.sh --export top_export_path [library-id1] [library-id2] ...\n
The argument top_export_path
is a directory to place the exported files. Each library will be exported as a sub-directory of the export path. If you don't specify library ids, all libraries will be exported.
Currently only un-encrypted libraries can be exported. Encrypted libraries will be skipped.
"},{"location":"administration/seafile_gc/","title":"Seafile GC","text":"Seafile uses storage de-duplication technology to reduce storage usage. The underlying data blocks will not be removed immediately after you delete a file or a library. As a result, the number of unused data blocks will increase on Seafile server.
To release the storage space occupied by unused blocks, you have to run a \"garbage collection\" program to clean up unused blocks on your server.
The GC program cleans up two types of unused blocks:
To see how much garbage can be collected without actually removing any garbage, use the dry-run option:
seaf-gc.sh --dry-run [repo-id1] [repo-id2] ...\n
The output should look like:
[03/19/15 19:41:49] seafserv-gc.c(115): GC version 1 repo My Library(ffa57d93)\n[03/19/15 19:41:49] gc-core.c(394): GC started. Total block number is 265.\n[03/19/15 19:41:49] gc-core.c(75): GC index size is 1024 Byte.\n[03/19/15 19:41:49] gc-core.c(408): Populating index.\n[03/19/15 19:41:49] gc-core.c(262): Populating index for repo ffa57d93.\n[03/19/15 19:41:49] gc-core.c(308): Traversed 5 commits, 265 blocks.\n[03/19/15 19:41:49] gc-core.c(440): Scanning unused blocks.\n[03/19/15 19:41:49] gc-core.c(472): GC finished. 265 blocks total, about 265 reachable blocks, 0 blocks can be removed.\n\n[03/19/15 19:41:49] seafserv-gc.c(115): GC version 1 repo aa(f3d0a8d0)\n[03/19/15 19:41:49] gc-core.c(394): GC started. Total block number is 5.\n[03/19/15 19:41:49] gc-core.c(75): GC index size is 1024 Byte.\n[03/19/15 19:41:49] gc-core.c(408): Populating index.\n[03/19/15 19:41:49] gc-core.c(262): Populating index for repo f3d0a8d0.\n[03/19/15 19:41:49] gc-core.c(308): Traversed 8 commits, 5 blocks.\n[03/19/15 19:41:49] gc-core.c(264): Populating index for sub-repo 9217622a.\n[03/19/15 19:41:49] gc-core.c(308): Traversed 4 commits, 4 blocks.\n[03/19/15 19:41:49] gc-core.c(440): Scanning unused blocks.\n[03/19/15 19:41:49] gc-core.c(472): GC finished. 5 blocks total, about 9 reachable blocks, 0 blocks can be removed.\n\n[03/19/15 19:41:49] seafserv-gc.c(115): GC version 1 repo test2(e7d26d93)\n[03/19/15 19:41:49] gc-core.c(394): GC started. Total block number is 507.\n[03/19/15 19:41:49] gc-core.c(75): GC index size is 1024 Byte.\n[03/19/15 19:41:49] gc-core.c(408): Populating index.\n[03/19/15 19:41:49] gc-core.c(262): Populating index for repo e7d26d93.\n[03/19/15 19:41:49] gc-core.c(308): Traversed 577 commits, 507 blocks.\n[03/19/15 19:41:49] gc-core.c(440): Scanning unused blocks.\n[03/19/15 19:41:49] gc-core.c(472): GC finished. 
507 blocks total, about 507 reachable blocks, 0 blocks can be removed.\n\n[03/19/15 19:41:50] seafserv-gc.c(124): === Repos deleted by users ===\n[03/19/15 19:41:50] seafserv-gc.c(145): === GC is finished ===\n\n[03/19/15 19:41:50] Following repos have blocks to be removed:\nrepo-id1\nrepo-id2\nrepo-id3\n
If you give specific library ids, only those libraries will be checked; otherwise all libraries will be checked.
repos have blocks to be removed
Notice that at the end of the output there is a "repos have blocks to be removed" section. It contains the list of libraries that have garbage blocks. Later, when you run GC without the --dry-run option, you can use these library ids as input arguments to the GC program.
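A sketch of extracting those ids from a saved dry-run log so they can be passed straight to the GC command. The file name gc-dry-run.log is illustrative, and the sample content is taken from the tail of the output above.

```shell
# Collect the library ids printed after the marker line.
cat > gc-dry-run.log <<'EOF'
[03/19/15 19:41:50] Following repos have blocks to be removed:
repo-id1
repo-id2
repo-id3
EOF
REPOS=$(awk 'found && NF {print} /Following repos have blocks to be removed:/ {found=1}' gc-dry-run.log)
echo "$REPOS"
```

You could then run `./seaf-gc.sh $REPOS` to collect garbage only in those libraries.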
"},{"location":"administration/seafile_gc/#removing-garbage","title":"Removing Garbage","text":"To actually remove garbage blocks, run without the --dry-run option:
seaf-gc.sh [repo-id1] [repo-id2] ...\n
If library IDs are specified, only those libraries will be checked for garbage.
As described before, there are two types of garbage blocks to be removed. Sometimes just removing the first type of blocks (those that belong to deleted libraries) is good enough. In this case, the GC program won't bother to check the libraries for outdated historic blocks. The \"-r\" option implements this feature:
seaf-gc.sh -r\n
Success
Libraries deleted by the users are not immediately removed from the system. Instead, they're moved into a \"trash\" in the system admin page. Before they're cleared from the trash, their blocks won't be garbage collected.
"},{"location":"administration/seafile_gc/#removing-fs-objects","title":"Removing FS objects","text":"Since Pro server 8.0.6 and community edition 9.0, you can remove garbage fs objects. It should be run without the --dry-run option:
seaf-gc.sh --rm-fs\n
Bug reports
This command has a bug before Pro Edition 10.0.15 and Community Edition 11.0.7. It could cause virtual libraries (e.g. shared folders) to fail to merge into their parent libraries. Please avoid using this option in the affected versions, and contact our support team if you are affected by this bug.
"},{"location":"administration/seafile_gc/#using-multiple-threads-in-gc","title":"Using Multiple Threads in GC","text":"You can specify the number of threads used in GC.
Use the \"-t\" option to set the thread count; it can be combined with all other options. Each thread runs GC on one library. For example, the following command uses 20 threads to GC all libraries:
seaf-gc.sh -t 20\n
Since the threads run concurrently, the output of each thread may be interleaved with the others'. The library ID is printed in each line of output.
"},{"location":"administration/seafile_gc/#run-gc-based-on-library-id-prefix","title":"Run GC based on library ID prefix","text":"GC usually runs quite slowly, as it needs to traverse the entire library history. You can use multiple threads to run GC in parallel. For even larger deployments, it's also desirable to run GC on multiple servers in parallel.
A simple pattern to divide the workload among multiple GC servers is to assign libraries to servers based on library ID. Since Pro edition 7.1.5, this is supported. You can add the \"--id-prefix\" option to seaf-gc.sh to specify the library ID prefix. For example, the command below will only process libraries whose IDs start with \"a123\":
seaf-gc.sh --id-prefix a123\n
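For illustration only (the helper below is not part of Seafile): one simple way to split the 16 possible first hex characters of library IDs among several GC servers, and to build the corresponding seaf-gc.sh invocations, is round-robin assignment:

```python
# Hypothetical helper, not part of Seafile: round-robin the 16 possible
# first hex characters of library IDs across N GC servers.

def assign_prefixes(num_servers):
    """Map server index -> list of single-character ID prefixes."""
    assignment = {i: [] for i in range(num_servers)}
    for pos, ch in enumerate("0123456789abcdef"):
        assignment[pos % num_servers].append(ch)
    return assignment

def gc_commands(server_index, num_servers):
    """Build the seaf-gc.sh command lines one server should run."""
    return ["seaf-gc.sh --id-prefix " + p
            for p in assign_prefixes(num_servers)[server_index]]

if __name__ == "__main__":
    # Server 0 of 4 handles prefixes 0, 4, 8 and c.
    for cmd in gc_commands(0, 4):
        print(cmd)
```

Each server then runs only its own prefixes, so the full ID space is covered exactly once across the cluster.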
"},{"location":"administration/security_features/","title":"Security Questions","text":""},{"location":"administration/security_features/#how-is-the-connection-between-client-and-server-encrypted","title":"How is the connection between client and server encrypted?","text":"Seafile has used HTTP(S) to sync files between client and server since version 4.1.0.
"},{"location":"administration/security_features/#encrypted-library","title":"Encrypted Library","text":"Seafile provides a feature called encrypted library to protect your privacy. The file encryption/decryption is performed on client-side when using the desktop client for file synchronization. The password of an encrypted library is not stored on the server. Even the system admin of the server can't view the file contents.
There are a few limitations to this feature:
Client-side encryption has worked on the iOS client since version 2.1.6 and on the Android client since version 2.1.0.
"},{"location":"administration/security_features/#how-does-an-encrypted-library-work","title":"How does an encrypted library work?","text":"When you create an encrypted library, you'll need to provide a password for it. All the data in that library will be encrypted with the password before uploading it to the server (see limitations above).
The encryption procedure is:
The above encryption procedure can be executed on the desktop and the mobile client. The Seahub browser client uses a different encryption procedure that happens at the server. Because of this your password will be transferred to the server.
When you sync an encrypted library to the desktop, the client needs to verify your password. When you create the library, a \"magic token\" is derived from the password and the library ID. This token is stored with the library on the server side. The client uses this token to check whether your password is correct before you sync the library. The magic token is generated by the PBKDF2 algorithm with 1000 iterations of SHA256 hashing.
For maximum security, the plain-text password isn't saved on the client side either. The client only saves the key/iv pair derived from the \"file key\", which is used to decrypt the data. So if you forget the password, you won't be able to recover it or access your data on the server.
"},{"location":"administration/security_features/#why-fileserver-delivers-every-content-to-everybody-knowing-the-content-url-of-an-unshared-private-file","title":"Why fileserver delivers every content to everybody knowing the content URL of an unshared private file?","text":"When a file download link is clicked, a random URL is generated for the user to access the file from the fileserver. This URL can only be accessed once; after that, all access to it is denied. So even if someone else happens to learn the URL, they can't access it anymore.
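The one-time URL behaviour described above can be sketched with a small token store (illustrative only; Seafile's actual fileserver implementation differs):

```python
import secrets

class OneTimeLinks:
    """Illustrative one-time download links: each token is valid for
    exactly one access and is revoked as soon as it is used."""

    def __init__(self):
        self._tokens = {}  # token -> file path

    def issue(self, file_path):
        token = secrets.token_urlsafe(16)  # random, unguessable
        self._tokens[token] = file_path
        return token

    def redeem(self, token):
        # pop() deletes the token, so any second access is denied
        return self._tokens.pop(token, None)
```

The first redeem() of a token returns the file path; every later attempt, by anyone, returns None.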
"},{"location":"administration/security_features/#how-does-seafile-store-user-login-password","title":"How does Seafile store user login password?","text":"User login passwords are stored in hash form only. Note that user login password is different from the passwords used in encrypted libraries. In the database, its format is
PBKDF2SHA256$iterations$salt$hash\n
The record is divided into 4 parts by the $ sign.
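As a sketch, such a record can be split into its four parts and a candidate password checked with Python's standard library. The salt and digest encodings below (hex) are assumptions for illustration; Seafile's exact encoding may differ:

```python
import hashlib

def check_password(record, password):
    """Split a PBKDF2SHA256$iterations$salt$hash record and verify a
    candidate password against it (hex encodings assumed)."""
    algo, iterations, salt_hex, stored_hex = record.split("$")
    if algo != "PBKDF2SHA256":
        raise ValueError("unexpected algorithm: " + algo)
    derived = hashlib.pbkdf2_hmac("sha256", password.encode("utf-8"),
                                  bytes.fromhex(salt_hex), int(iterations))
    return derived.hex() == stored_hex

# Build a record the same way to demonstrate a round trip.
salt = b"\x01\x02\x03\x04"
digest = hashlib.pbkdf2_hmac("sha256", b"secret", salt, 10000)
record = "PBKDF2SHA256$10000$%s$%s" % (salt.hex(), digest.hex())
print(check_password(record, "secret"))  # True
print(check_password(record, "wrong"))   # False
```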
To calculate the hash:
PBKDF2(password, salt, iterations)
. The number of iterations is currently 10000. Starting from version 6.0, we added Two-Factor Authentication to enhance account security.
There are two ways to enable this feature:
System admin can tick the check-box at the \"Password\" section of the system settings page, or
just add the following settings to seahub_settings.py
and restart service.
ENABLE_TWO_FACTOR_AUTH = True\nTWO_FACTOR_DEVICE_REMEMBER_DAYS = 30 # optional, default 90 days.\n
After that, there will be a \"Two-Factor Authentication\" section in the user profile page.
Users can use the Google Authenticator app on their smart-phone to scan the QR code.
"},{"location":"changelog/changelog-for-seafile-professional-server-old/","title":"Seafile Professional Server Changelog (old)","text":""},{"location":"changelog/changelog-for-seafile-professional-server-old/#44","title":"4.4","text":"Note: Two new options are added in version 4.4, both are in seahub_settings.py
This version contains no database table change.
"},{"location":"changelog/changelog-for-seafile-professional-server-old/#449-20160229","title":"4.4.9 (2016.02.29)","text":"LDAP improvements and fixes
New features:
Pro only:
Fixes:
Note: this version contains no database table change from v4.2. But the old search index will be deleted and regenerated.
Note when upgrading from v4.2 and using cluster, a new option COMPRESS_CACHE_BACKEND = 'locmem://'
should be added to seahub_settings.py
About \"Open via Client\": The web interface will call Seafile desktop client via \"seafile://\" protocol to use local program to open a file. If the file is already synced, the local file will be opened. Otherwise it is downloaded and uploaded after modification. Need client version 4.3.0+
"},{"location":"changelog/changelog-for-seafile-professional-server-old/#430-20150725","title":"4.3.0 (2015.07.25)","text":"Usability improvements
Pro only features:
Others
THUMBNAIL_DEFAULT_SIZE = 24
, instead of THUMBNAIL_DEFAULT_SIZE = '24'
Note: because Seafile has changed the way how office preview work in version 4.2.2, you need to clean the old generated files using the command:
rm -rf /tmp/seafile-office-output/html/\n
"},{"location":"changelog/changelog-for-seafile-professional-server-old/#424-20150708","title":"4.2.4 (2015.07.08)","text":"In the old way, the whole file is converted to HTML5 before returning to the client. By converting an office file to HTML5 page by page, the first page will be displayed faster. By displaying each page in a separate frame, the quality for some files is improved too.
"},{"location":"changelog/changelog-for-seafile-professional-server-old/#421-20150630","title":"4.2.1 (2015.06.30)","text":"Improved account management
Important
New features
Others
Pro only updates
Usability
Security Improvement
Platform
Pro only updates
Updates in community edition too
Important
Small
Pro edition only:
Syncing
Platform
Web
Web
Platform
Web
Platform
Misc
WebDAV
pro.py search --clear
commandPlatform
Web
Web for Admin
Platform
Web
Web for Admin
API
Web
API
Platform
You can check Seafile release table to find the lifetime of each release and current supported OS: https://cloud.seatable.io/dtable/external-links/a85d4221e41344c19566/?tid=0000&vid=0000
"},{"location":"changelog/changelog-for-seafile-professional-server/#120","title":"12.0","text":"Upgrade
Please check our document for how to upgrade to 12.0
"},{"location":"changelog/changelog-for-seafile-professional-server/#1204-beta-to-be-told","title":"12.0.4 beta (to-be-told)","text":".env
file.ccnet.conf
is removed. Some of its configuration items are moved from .env
file, others are read from items in seafile.conf
with same name.Upgrade
Please check our document for how to upgrade to 11.0
"},{"location":"changelog/changelog-for-seafile-professional-server/#11016-2024-11-04","title":"11.0.16 (2024-11-04)","text":"Seafile
SDoc editor 0.8
Seafile
SDoc editor 0.7
SDoc editor 0.6
Major changes
UI Improvements
Pro edition only changes
Other changes
Upgrade
Please check our document for how to upgrade to 10.0.
Note
If you upgrade to version 10.0.18+ from 10.0.16 or below, you need to upgrade SQLAlchemy to version 1.4.44+ if you use the binary-based installation. Otherwise the \"activities\" page will not work.
"},{"location":"changelog/changelog-for-seafile-professional-server/#10018-2024-11-01","title":"10.0.18 (2024-11-01)","text":"This release is for Docker image only
Note, after upgrading to this version, you need to upgrade the Python libraries in your server \"pillow==10.2.* captcha==0.5.* django_simple_captcha==0.5.20\"
"},{"location":"changelog/changelog-for-seafile-professional-server/#10012-2024-01-16","title":"10.0.12 (2024-01-16)","text":"Upgrade
Please check our document for how to upgrade to 9.0.
"},{"location":"changelog/changelog-for-seafile-professional-server/#9016-2023-03-22","title":"9.0.16 (2023-03-22)","text":"Note: the included lxml library is removed for compatibility reasons. The library is used in the published libraries feature and the WebDAV feature. You need to install lxml manually after upgrading to 9.0.7. Use the command pip3 install lxml
to install it.
The new file-server written in golang serves HTTP requests to upload/download/sync files. It provides three advantages:
You can turn golang file-server on by adding following configuration in seafile.conf
[fileserver]\nuse_go_fileserver = true\n
"},{"location":"changelog/changelog-for-seafile-professional-server/#901","title":"9.0.1","text":"Deprecated
"},{"location":"changelog/changelog-for-seafile-professional-server/#900","title":"9.0.0","text":"Deprecated
"},{"location":"changelog/changelog-for-seafile-professional-server/#80","title":"8.0","text":"Upgrade
Please check our document for how to upgrade to 8.0.
"},{"location":"changelog/changelog-for-seafile-professional-server/#8017-20220110","title":"8.0.17 (2022/01/10)","text":"Potential breaking change in Seafile Pro 8.0.3: You can set the maximum number of files contained in a library that can be synced by the Seafile client. The default is 100000. When you download a repo, Seafile client will request fs id list, and you can control the timeout period of this request through fs_id_list_request_timeout
configuration, which defaults to 5 minutes. These two options are added to prevent long fs-id-list requests from overloading the server. If you have large libraries on the server, this can cause an \"internal server error\" to be returned to the client, so you have to set large enough limits for these two options.
[fileserver]\nmax_sync_file_count = 100000\nfs_id_list_request_timeout = 300\n
"},{"location":"changelog/changelog-for-seafile-professional-server/#802-20210421","title":"8.0.2 (2021/04/21)","text":"Upgrade
Please check our document for how to upgrade to 7.1.
"},{"location":"changelog/changelog-for-seafile-professional-server/#7122-20210729","title":"7.1.22 (2021/07/29)","text":"Potential breaking change in Seafile Pro 7.1.16: You can set the maximum number of files contained in a library that can be synced by the Seafile client. The default is 100000. When you download a repo, Seafile client will request fs id list, and you can control the timeout period of this request through fs_id_list_request_timeout
configuration, which defaults to 5 minutes. These two options are added to prevent long fs-id-list requests from overloading the server. If you have large libraries on the server, this can cause an \"internal server error\" to be returned to the client, so you have to set large enough limits for these two options.
[fileserver]\nmax_sync_file_count = 100000\nfs_id_list_request_timeout = 300\n
"},{"location":"changelog/changelog-for-seafile-professional-server/#7115-20210318","title":"7.1.15 (2021/03/18)","text":"Since seafile-pro 7.0.0, we have upgraded Elasticsearch to 5.6. As Elasticsearch 5.6 relies on the Java 8 environment and can't run with root, you need to run Seafile with a non-root user and upgrade the Java version.
"},{"location":"changelog/changelog-for-seafile-professional-server/#7019-20200907","title":"7.0.19 (2020/09/07)","text":"-Xms1g -Xmx1g
In version 6.3, Django is upgraded to version 1.11. Django 1.8, which is used in version 6.2, is deprecated in 2018 April.
With this upgrade, the fast-cgi mode is no longer supported. You need to config Seafile behind Nginx/Apache in WSGI mode.
The way to run Seahub in another port is also changed. You need to modify the configuration file conf/gunicorn.conf
instead of running ./seahub.sh start <another-port>
.
Version 6.3 also changed the database table for file comments. If you have used this feature, you need to migrate old file comments using the following command after upgrading to 6.3:
./seahub.sh python-env seahub/manage.py migrate_file_comment\n
Note, this command should be run while Seafile server is running.
Version 6.3 changed '/shib-login' to '/sso'. If you use Shibboleth, you need to update your Apache/Nginx config. Please check the updated document: shibboleth config v6.3
Version 6.3 add a new option for file search (seafevents.conf
):
[INDEX FILES]\n...\nhighlight = fvh\n...\n
This option improves search speed significantly (about 10x) when the search result contains large pdf/doc files. But you need to rebuild the search index if you want to add this option.
"},{"location":"changelog/changelog-for-seafile-professional-server/#6314-20190521","title":"6.3.14 (2019/05/21)","text":"New features
From 6.2, it is recommended to use WSGI mode for communication between Seahub and Nginx/Apache. Two steps are needed if you'd like to switch to WSGI mode:
./seahub.sh start
instead of ./seahub.sh start-fastcgi
The configuration of Nginx is as following:
location / {\n proxy_pass http://127.0.0.1:8000;\n proxy_set_header Host $host;\n proxy_set_header X-Real-IP $remote_addr;\n proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;\n proxy_set_header X-Forwarded-Host $server_name;\n proxy_read_timeout 1200s;\n\n # used for view/edit office file via Office Online Server\n client_max_body_size 0;\n\n access_log /var/log/nginx/seahub.access.log;\n error_log /var/log/nginx/seahub.error.log;\n }\n
The configuration of Apache is as following:
# seahub\n SetEnvIf Authorization \"(.*)\" HTTP_AUTHORIZATION=$1\n ProxyPass / http://127.0.0.1:8000/\n ProxyPassReverse / http://127.0.0.1:8000/\n
"},{"location":"changelog/changelog-for-seafile-professional-server/#6213-2018518","title":"6.2.13 (2018.5.18)","text":"file already exists
error for the first time.per_page
parameter to 10 when search file via api.repo_owner
field to library search web api.ENABLE_REPO_SNAPSHOT_LABEL = True
to turn the feature on)You can follow the document on minor upgrade.
"},{"location":"changelog/changelog-for-seafile-professional-server/#619-20170928","title":"6.1.9 \uff082017.09.28\uff09","text":"Web UI Improvement:
Improvement for admins:
System changes:
ENABLE_WIKI = True
in seahub_settings.py)You can follow the document on minor upgrade.
Special note for upgrading a cluster:
In version 6.0, the folder download mechanism has been updated. This requires that, in a cluster deployment, seafile-data/httptemp folder must be in an NFS share. You can make this folder a symlink to the NFS share.
cd /data/haiwen/\nln -s /nfs-share/seafile-httptemp seafile-data/httptemp\n
The httptemp folder only contains temp files for downloading/uploading file on web UI. So there is no reliability requirement for the NFS share. You can export it from any node in the cluster.
"},{"location":"changelog/changelog-for-seafile-professional-server/#6013-20170508","title":"6.0.13 (2017.05.08)","text":"Improvement for admin
Other
# -*- coding: utf-8 -*-
to seahub_settings.py, so that admin can use non-ascii characters in the file.[Audit]
and [AUDIT]
in seafevent.confPro only features
._
cloud file browser
others
This version has a few bugs. We will fix it soon.
"},{"location":"changelog/client-changelog/#601-20161207","title":"6.0.1 (2016/12/07)","text":"Note: Seafile client now support HiDPI under Windows, you should remove QT_DEVICE_PIXEL_RATIO settings if you had set one previous.
In the old version, you will sometimes see strange directory such as \"Documents~1\" synced to the server, this because the old version did not handle long path correctly.
"},{"location":"changelog/client-changelog/#406-20150109","title":"4.0.6 (2015/01/09)","text":"In the previous version, when you open an office file in Windows, it is locked by the operating system. If another person modify this file in another computer, the syncing will be stopped until you close the locked file. In this new version, the syncing process will continue. The locked file will not be synced to local computer, but other files will not be affected.
"},{"location":"changelog/client-changelog/#403-20141203","title":"4.0.3 (2014/12/03)","text":"You have to update all the clients in all the PCs. If one PC does not use the v3.1.11, when the \"deleting folder\" information synced to this PC, it will fail to delete the folder completely. And the folder will be synced back to other PCs. So other PCs will see the folder reappear again.
"},{"location":"changelog/client-changelog/#3110-20141113","title":"3.1.10 (2014/11/13)","text":"Note: This version contains a bug that you can't login into your private servers.
1.8.1
1.8.0
1.7.3
1.7.2
1.7.1
1.7.0
1.6.2
1.6.1
1.6.0
1.5.3
1.5.2
1.5.1
1.5.0
S:
because a few programs will automatically try to create files in S:
Note when upgrade to 5.0 from 4.4
You can follow the document on major upgrade (http://manual.seafile.com/deploy/upgrade.html) (URL might be deprecated)
In Seafile 5.0, we have moved all config files to folder conf
, including:
If you want to downgrade from v5.0 to v4.4, you should manually copy these files back to the original place, then run minor_upgrade.sh to upgrade symbolic links back to version 4.4.
The 5.0 server is compatible with v4.4 and v4.3 desktop clients.
Common issues (solved) when upgrading to v5.0:
Improve seaf-fsck
Sharing link
[[ Pagename]]
.UI changes:
Config changes:
conf
Trash:
Admin:
Security:
New features:
Fixes:
Usability Improvement
Others
THUMBNAIL_DEFAULT_SIZE = 24
, instead of THUMBNAIL_DEFAULT_SIZE = '24'
Note when upgrade to 4.2 from 4.1:
If you deploy Seafile in a non-root domain, you need to add the following extra settings in seahub_settings.py:
COMPRESS_URL = MEDIA_URL\nSTATIC_URL = MEDIA_URL + '/assets/'\n
"},{"location":"changelog/server-changelog-old/#423-20150618","title":"4.2.3 (2015.06.18)","text":"Usability
Security Improvement
Platform
Important
Small
Important
Small improvements
Syncing
Platform
Web
Web
Platform
Web
Platform
Platform
Web
WebDAV
<a>, <table>, <img>
and a few other html elements in markdown to avoid XSS attack. Platform
Web
Web for Admin
Platform
Web
Web for Admin
API
Web
API
Platform
Web
Daemon
Web
Daemon
Web
For Admin
API
Seafile Web
Seafile Daemon
API
You can check Seafile release table to find the lifetime of each release and current supported OS: https://cloud.seatable.io/dtable/external-links/a85d4221e41344c19566/?tid=0000&vid=0000
"},{"location":"changelog/server-changelog/#120","title":"12.0","text":"Upgrade
Please check our document for how to upgrade to 12.0
"},{"location":"changelog/server-changelog/#1204-beta-2024-11-21","title":"12.0.4 beta (2024-11-21)","text":".env
file.ccnet.conf
is removed. Some of its configuration items are moved from .env
file, others are read from items in seafile.conf
with same name.Upgrade
Please check our document for how to upgrade to 11.0
"},{"location":"changelog/server-changelog/#11012-2024-08-14","title":"11.0.12 (2024-08-14)","text":"Seafile
Seafile
SDoc editor 0.8
Seafile
SDoc editor 0.7
Seafile
SDoc editor 0.6
Seafile
Seafile
SDoc editor 0.5
Seafile
SDoc editor 0.4
Seafile
SDoc editor 0.3
Seafile
SDoc editor 0.2
Upgrade
Please check our document for how to upgrade to 10.0.
"},{"location":"changelog/server-changelog/#1001-2023-04-11","title":"10.0.1 (2023-04-11)","text":"/accounts/login
redirect by ?next=
parameterNote: included lxml library is removed for some compatiblity reason. The library is used in published libraries feature and WebDAV feature. You need to install lxml manually after upgrade to 9.0.7. Use command pip3 install lxml
to install it.
The new file-server written in golang serves HTTP requests to upload/download/sync files. It provides three advantages:
You can turn golang file-server on by adding following configuration in seafile.conf
[fileserver]\nuse_go_fileserver = true\n
"},{"location":"changelog/server-changelog/#80","title":"8.0","text":"Please check our document for how to upgrade to 8.0.
"},{"location":"changelog/server-changelog/#808-20211206","title":"8.0.8 (2021/12/06)","text":"Feature changes
Progresql support is dropped as we have rewritten the database access code to remove copyright issue.
Upgrade
Please check our document for how to upgrade to 7.1.
"},{"location":"changelog/server-changelog/#715-20200922","title":"7.1.5 (2020/09/22)","text":"Feature changes
In version 6.3, users can create public or private Wikis. In version 7.0, private Wikis is replaced by column mode view. Every library has a column mode view. So users don't need to explicitly create private Wikis.
Public Wikis are now renamed to published libraries.
Upgrade
Just follow our document on major version upgrade. No special steps are needed.
"},{"location":"changelog/server-changelog/#705-20190923","title":"7.0.5 (2019/09/23)","text":"In version 6.3, Django is upgraded to version 1.11. Django 1.8, which is used in version 6.2, is deprecated in 2018 April.
With this upgrade, the fast-cgi mode is no longer supported. You need to config Seafile behind Nginx/Apache in WSGI mode.
The way to run Seahub in another port is also changed. You need to modify the configuration file conf/gunicorn.conf
instead of running ./seahub.sh start <another-port>
.
Version 6.3 also changed the database table for file comments, if you have used this feature, you need migrate old file comments using the following commends after upgrading to 6.3:
./seahub.sh python-env seahub/manage.py migrate_file_comment\n
Note, this command should be run while Seafile server is running.
"},{"location":"changelog/server-changelog/#634-20180915","title":"6.3.4 (2018/09/15)","text":"From 6.2, It is recommended to use WSGI mode for communication between Seahub and Nginx/Apache. Two steps are needed if you'd like to switch to WSGI mode:
./seahub.sh start
instead of ./seahub.sh start-fastcgi
The configuration of Nginx is as following:
location / {\n proxy_pass http://127.0.0.1:8000;\n proxy_set_header Host $host;\n proxy_set_header X-Real-IP $remote_addr;\n proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;\n proxy_set_header X-Forwarded-Host $server_name;\n proxy_read_timeout 1200s;\n\n # used for view/edit office file via Office Online Server\n client_max_body_size 0;\n\n access_log /var/log/nginx/seahub.access.log;\n error_log /var/log/nginx/seahub.error.log;\n }\n
The configuration of Apache is as following:
# seahub\n SetEnvIf Authorization \"(.*)\" HTTP_AUTHORIZATION=$1\n ProxyPass / http://127.0.0.1:8000/\n ProxyPassReverse / http://127.0.0.1:8000/\n
"},{"location":"changelog/server-changelog/#625-20180123","title":"6.2.5 (2018/01/23)","text":"ENABLE_REPO_SNAPSHOT_LABEL = True
to turn the feature on)If you upgrade from 6.0 and you'd like to use the feature video thumbnail, you need to install ffmpeg package:
# for ubuntu 16.04\napt-get install ffmpeg\npip install pillow moviepy\n\n# for Centos 7\nyum -y install epel-release\nrpm --import http://li.nux.ro/download/nux/RPM-GPG-KEY-nux.ro\nyum -y install ffmpeg ffmpeg-devel\npip install pillow moviepy\n
"},{"location":"changelog/server-changelog/#612-20170815","title":"6.1.2 (2017.08.15)","text":"Web UI Improvement:
Improvement for admins:
System changes:
Note: If you ever used 6.0.0 or 6.0.1 or 6.0.2 with SQLite as database and encoutered a problem with desktop/mobile client login, follow https://github.com/haiwen/seafile/pull/1738 to fix the problem.
"},{"location":"changelog/server-changelog/#609-20170330","title":"6.0.9 (2017.03.30)","text":"Improvement for admin
# -*- coding: utf-8 -*-
to seahub_settings.py, so that admin can use non-ascii characters in the file.Other
Warning:
Note: when upgrade from 5.1.3 or lower version to 5.1.4+, you need to install python-urllib3 (or python2-urllib3 for Arch Linux) manually:
# for Ubuntu\nsudo apt-get install python-urllib3\n# for CentOS\nsudo yum install python-urllib3\n
"},{"location":"changelog/server-changelog/#514-20160723","title":"5.1.4 (2016.07.23)","text":"Note: downloading multiple files at once will be added in the next release.
Note: in this version, the group discussion is not re-implement yet. It will be available when the stable verison is released.
The config files used in Seafile include:
You can also modify most of the config items via web interface.The config items are saved in database table (seahub-db/constance_config). They have a higher priority over the items in config files.
"},{"location":"config/#the-design-of-configure-options","title":"The design of configure options","text":"There are now three places you can config Seafile server:
The web interface has the highest priority. It contains a subset of end-user oriented settings. In practise, you can disable settings via web interface for simplicity.
Environment variables contains system level settings that needed when initialize Seafile server or run Seafile server. Environment variables also have three categories:
The variables in the first category can be deleted after initialization. In the future, we will make more components to read config from environment variables, so that the third category is no longer needed.
"},{"location":"config/admin_roles_permissions/","title":"Roles and Permissions Support","text":"You can add/edit roles and permission for administrators. Seafile has four build-in admin roles:
default_admin, has all permissions.
system_admin, can only view system info and config system.
daily_admin, can only view system info, view statistic, manage library/user/group, view user log.
audit_admin, can only view system info and admin log.
All administrators will have default_admin
role with all permissions by default. If you set an administrator to some other admin role, the administrator will only have the permissions you configured to True
.
Seafile supports eight permissions for now, its configuration is very like common user role, you can custom it by adding the following settings to seahub_settings.py
.
ENABLED_ADMIN_ROLE_PERMISSIONS = {\n 'system_admin': {\n 'can_view_system_info': True,\n 'can_config_system': True,\n },\n 'daily_admin': {\n 'can_view_system_info': True,\n 'can_view_statistic': True,\n 'can_manage_library': True,\n 'can_manage_user': True,\n 'can_manage_group': True,\n 'can_view_user_log': True,\n },\n 'audit_admin': {\n 'can_view_system_info': True,\n 'can_view_admin_log': True,\n },\n 'custom_admin': {\n 'can_view_system_info': True,\n 'can_config_system': True,\n 'can_view_statistic': True,\n 'can_manage_library': True,\n 'can_manage_user': True,\n 'can_manage_group': True,\n 'can_view_user_log': True,\n 'can_view_admin_log': True,\n },\n}\n
"},{"location":"config/auth_switch/","title":"Switch authentication type","text":"Seafile Server supports the following external authentication types:
Since 11.0 version, switching between the types is possible, but any switch requires modifications of Seafile's databases.
Note
Before manually manipulating your database, make a database backup, so you can restore your system if anything goes wrong!
See more about make a database backup.
"},{"location":"config/auth_switch/#migrating-from-local-user-database-to-external-authentication","title":"Migrating from local user database to external authentication","text":"As an organisation grows and its IT infrastructure matures, the migration from local authentication to external authentication like LDAP, SAML, OAUTH is common requirement. Fortunately, the switch is comparatively simple.
"},{"location":"config/auth_switch/#general-procedure","title":"General procedure","text":"Configure and test the desired external authentication. Note the name of the provider
you use in the config file. The user to be migrated should already be able to log in with this new authentication type, but he will be created as a new user with a new unique identifier, so he will not have access to his existing libraries. Note the uid
from the social_auth_usersocialauth
table. Delete this new, still empty user again.
Determine the ID of the user to be migrated in ccnet_db.EmailUser. For users created before version 11, the ID should be the user's email, for users created after version 11, the ID should be a string like xxx@auth.local
.
Replace the password hash with an exclamation mark.
Create a new entry in social_auth_usersocialauth
with the xxx@auth.local
, your provider
and the uid
.
The login with the password stored in the local database is not possible anymore. After logging in via external authentication, the user has access to all his previous libraries.
"},{"location":"config/auth_switch/#example","title":"Example","text":"This example shows how to migrate the user with the username 12ae56789f1e4c8d8e1c31415867317c@auth.local
from local database authentication to OAuth. The OAuth authentication is configured in seahub_settings.py
with the provider name authentik-oauth
. The uid
of the user inside the Identity Provider is HR12345
.
This is what the database looks like before these commands must be executed:
mysql> select email,left(passwd,25) from EmailUser where email = '12ae56789f1e4c8d8e1c31415867317c@auth.local';\n+---------------------------------------------+------------------------------+\n| email | left(passwd,25) |\n+---------------------------------------------+------------------------------+\n| 12ae56789f1e4c8d8e1c31415867317c@auth.local | PBKDF2SHA256$10000$4cdda6... |\n+---------------------------------------------+------------------------------+\n\nmysql> update EmailUser set passwd = '!' where email = '12ae56789f1e4c8d8e1c31415867317c@auth.local';\n\nmysql> insert into `social_auth_usersocialauth` (`username`, `provider`, `uid`, `extra_data`) values ('12ae56789f1e4c8d8e1c31415867317c@auth.local', 'authentik-oauth', 'HR12345', '');\n
Note
The extra_data
field stores the user's information returned from the provider. For most providers, the extra_data
field is usually an empty string. Since version 11.0.3-Pro, the default value of the extra_data
field is NULL
.
Afterwards, the database should look like this:
mysql> select email,passwd from EmailUser where email = '12ae56789f1e4c8d8e1c31415867317c@auth.local';\n+---------------------------------------------+------- +\n| email | passwd |\n+---------------------------------------------+--------+\n| 12ae56789f1e4c8d8e1c31415867317c@auth.local | ! |\n+---------------------------------------------+--------+\n\nmysql> select username,provider,uid from social_auth_usersocialauth where username = '12ae56789f1e4c8d8e1c31415867317c@auth.local';\n+---------------------------------------------+-----------------+---------+\n| username | provider | uid |\n+---------------------------------------------+-----------------+---------+\n| 12ae56789f1e4c8d8e1c31415867317c@auth.local | authentik-oauth | HR12345 |\n+---------------------------------------------+-----------------+---------+\n
"},{"location":"config/auth_switch/#migrating-from-one-external-authentication-to-another","title":"Migrating from one external authentication to another","text":"First configure the two external authentications and test them with a dummy user. Then, to migrate all the existing users you only need to make changes to the social_auth_usersocialauth
table. No entries need to be deleted or created; you only need to modify the existing ones. The xxx@auth.local
remains the same; you only need to replace the provider
and the uid
.
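As a sketch, the update for one user could look like the following statement; the username, provider name, and uid below are placeholders and must be replaced with your own values:

```
mysql> update social_auth_usersocialauth set provider = 'new-provider', uid = 'NEW-UID' where username = '12ae56789f1e4c8d8e1c31415867317c@auth.local';
```

Repeat this (or run a bulk UPDATE keyed on the old provider) for every user to be migrated.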
First, delete the entry in the social_auth_usersocialauth
table that belongs to the particular user.
Then you can reset the user's password, e.g. via the web interface. The user will be assigned a local password, and from then on authentication will be done against Seafile's local database.
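The deletion step can be sketched as follows; the username is a placeholder for the user's xxx@auth.local identifier:

```
mysql> delete from social_auth_usersocialauth where username = '12ae56789f1e4c8d8e1c31415867317c@auth.local';
```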
More details about this option will follow soon.
"},{"location":"config/auto_login_seadrive/","title":"Auto Login to SeaDrive on Windows","text":"Kerberos is a widely used single sign-on (SSO) protocol. Auto login relies on a Kerberos service. For server configuration, please read the remote user authentication documentation. You have to configure Apache to authenticate with Kerberos; this is out of the scope of this documentation, but you can, for example, refer to this webpage.
"},{"location":"config/auto_login_seadrive/#technical-details","title":"Technical Details","text":"The client machine has to join the AD domain. In a Windows domain, the Kerberos Key Distribution Center (KDC) is implemented on the domain service. Since the client machine has been authenticated by the KDC when a Windows user logs in, a Kerberos ticket is generated for the current user without the need for another login in the browser.
When a program using the WinHttp API tries to connect to a server, it can log in automatically through Integrated Windows Authentication. Internet Explorer and SeaDrive both use this mechanism.
The details of Integrated Windows Authentication are described below:
In short:
Internet Options have to be configured as follows:
Open \"Internet Options\", select the \"Security\" tab, and select the \"Local Intranet\" zone.
Note
The above configuration requires a reboot to take effect.
Next, test the auto login function in Internet Explorer: visit the website and click the \"Single Sign-On\" link. It should log in directly; otherwise, auto login is not working correctly.
Note
The address used in the test must be the same as the address specified in the keytab file. Otherwise, the client machine can't get a valid ticket from Kerberos.
"},{"location":"config/auto_login_seadrive/#auto-login-on-seadrive","title":"Auto Login on SeaDrive","text":"SeaDrive will use the Kerberos login configuration from the Windows Registry under HKEY_CURRENT_USER/SOFTWARE/SeaDrive
.
Key : PreconfigureServerAddr\nType : REG_SZ\nValue : <the url of seafile server>\n\nKey : PreconfigureUseKerberosLogin\nType : REG_SZ\nValue : <0|1> // 0 for normal login, 1 for SSO login\n
The system wide configuration path is located at HKEY_LOCAL_MACHINE/SOFTWARE/Wow6432Node/SeaDrive
.
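These registry values can also be deployed as a Registration Entries (.reg) file. A minimal sketch, in which the server URL is a placeholder to be replaced with your own Seafile address:

```
Windows Registry Editor Version 5.00

[HKEY_CURRENT_USER\SOFTWARE\SeaDrive]
"PreconfigureServerAddr"="https://seafile.example.com"
"PreconfigureUseKerberosLogin"="1"
```

Double-clicking the file (or importing it with reg.exe) writes both values for the current user.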
SeaDrive can be installed silently with the following command (requires admin privileges):
msiexec /i seadrive.msi /quiet /qn /log install.log\n
"},{"location":"config/auto_login_seadrive/#auto-login-via-group-policy","title":"Auto Login via Group Policy","text":"The configuration of Internet Options : https://docs.microsoft.com/en-us/troubleshoot/browsers/how-to-configure-group-policy-preference-settings
The configuration of Windows Registry : https://thesolving.com/server-room/how-to-deploy-a-registry-key-via-group-policy/
"},{"location":"config/ccnet-conf/","title":"ccnet.conf","text":"Ccnet is the internal RPC framework used by Seafile server and also manages the user database. A few useful options are in ccnet.conf.
ccnet.conf
is removed in version 12.0
Because ccnet.conf
was removed in version 12.0, the following information is now read from the .env
file:
SEAFILE_MYSQL_DB_USER: The database user, the default is seafile\nSEAFILE_MYSQL_DB_PASSWORD: The database password\nSEAFILE_MYSQL_DB_HOST: The database host\nSEAFILE_MYSQL_DB_CCNET_DB_NAME: The database name for ccnet db, the default is ccnet_db\n
"},{"location":"config/ccnet-conf/#changing-mysql-connection-pool-size","title":"Changing MySQL Connection Pool Size","text":"In version 12.0, the following information is read from the same option in seafile.conf
When you configure ccnet to use MySQL, the default connection pool size is 100, which should be enough for most use cases. You can change this value by adding the following options to ccnet.conf:
[Database]\n......\n# Use larger connection pool\nMAX_CONNECTIONS = 200\n
"},{"location":"config/ccnet-conf/#using-encrypted-connections","title":"Using Encrypted Connections","text":"In version 12.0, the following information is read from the same option in seafile.conf
Since Seafile 10.0.2, you can enable the encrypted connections to the MySQL server by adding the following configuration options:
[Database]\nUSE_SSL = true\nSKIP_VERIFY = false\nCA_PATH = /etc/mysql/ca.pem\n
When use_ssl
is set to true and skip_verify
to false, the MySQL server certificate is verified against the CA configured in ca_path
. The ca_path
option is the path of a trusted CA certificate used to sign MySQL server certificates. When skip_verify
is true, there is no need to add the ca_path
option; the MySQL server certificate won't be verified in that case.
To use ADFS to log in to your Seafile, you need the following components:
A Windows Server with ADFS installed. For configuring and installing ADFS, you can see this article.
A valid SSL certificate for ADFS server, and here we use adfs-server.adfs.com as the domain name example.
A valid SSL certificate for Seafile server, and here we use demo.seafile.com as the domain name example.
You can generate them by:
``` openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout sp.key -out sp.crt
These x.509 certs are used to sign and encrypt elements like NameID and Metadata for SAML. \n\n Then copy these two files to **<seafile-install-path>/seahub-data/certs**. (if the certs folder does not exist, create it.)\n\n2. x.509 cert from IdP (Identity Provider)\n\n 1. Log into the ADFS server and open the ADFS management.\n\n 1. Double click **Service** and choose **Certificates**.\n\n 1. Export the **Token-Signing** certificate:\n\n 1. Right-click the certificate and select **View Certificate**.\n 1. Select the **Details** tab.\n 1. Click **Copy to File** (select **DER encoded binary X.509**).\n\n 1. Convert this certificate to PEM format, rename it to **idp.crt**\n\n 1. Then copy it to **<seafile-install-path>/seahub-data/certs**.\n\n### Prepare IdP Metadata File\n\n1. Open https://adfs-server.adfs.com/federationmetadata/2007-06/federationmetadata.xml\n\n1. Save this xml file, rename it to **idp_federation_metadata.xml**\n\n1. Copy it to **<seafile-install-path>/seahub-data/certs**.\n\n### Install Requirements on Seafile Server\n\n- For Ubuntu 16.04\n
sudo apt install xmlsec1\nsudo pip install cryptography djangosaml2==0.15.0\n\n### Config Seafile\n\nAdd the following lines to **seahub_settings.py**\n
from os import path import saml2 import saml2.saml"},{"location":"config/config_seafile_with_ADFS/#update-following-lines-according-to-your-situation","title":"update following lines according to your situation","text":"CERTS_DIR = '/seahub-data/certs' SP_SERVICE_URL = 'https://demo.seafile.com' XMLSEC_BINARY = '/usr/local/bin/xmlsec1' ATTRIBUTE_MAP_DIR = '/seafile-server-latest/seahub-extra/seahub_extra/adfs_auth/attribute-maps' SAML_ATTRIBUTE_MAPPING = { 'DisplayName': ('display_name', ), 'ContactEmail': ('contact_email', ), 'Deparment': ('department', ), 'Telephone': ('telephone', ), }"},{"location":"config/config_seafile_with_ADFS/#update-the-idp-section-in-sampl_config-according-to-your-situation-and-leave-others-as-default","title":"update the 'idp' section in SAMPL_CONFIG according to your situation, and leave others as default","text":"
ENABLE_ADFS_LOGIN = True EXTRA_AUTHENTICATION_BACKENDS = ( 'seahub_extra.adfs_auth.backends.Saml2Backend', ) SAML_USE_NAME_ID_AS_USERNAME = True LOGIN_REDIRECT_URL = '/saml2/complete/' SAML_CONFIG = { # full path to the xmlsec1 binary programm 'xmlsec_binary': XMLSEC_BINARY,
'allow_unknown_attributes': True,\n\n# your entity id, usually your subdomain plus the url to the metadata view\n'entityid': SP_SERVICE_URL + '/saml2/metadata/',\n\n# directory with attribute mapping\n'attribute_map_dir': ATTRIBUTE_MAP_DIR,\n\n# this block states what services we provide\n'service': {\n # we are just a lonely SP\n 'sp' : {\n \"allow_unsolicited\": True,\n 'name': 'Federated Seafile Service',\n 'name_id_format': saml2.saml.NAMEID_FORMAT_EMAILADDRESS,\n 'endpoints': {\n # url and binding to the assetion consumer service view\n # do not change the binding or service name\n 'assertion_consumer_service': [\n (SP_SERVICE_URL + '/saml2/acs/',\n saml2.BINDING_HTTP_POST),\n ],\n # url and binding to the single logout service view\n # do not change the binding or service name\n 'single_logout_service': [\n (SP_SERVICE_URL + '/saml2/ls/',\n saml2.BINDING_HTTP_REDIRECT),\n (SP_SERVICE_URL + '/saml2/ls/post',\n saml2.BINDING_HTTP_POST),\n ],\n },\n\n # attributes that this project need to identify a user\n 'required_attributes': [\"uid\"],\n\n # attributes that may be useful to have but not required\n 'optional_attributes': ['eduPersonAffiliation', ],\n\n # in this section the list of IdPs we talk to are defined\n 'idp': {\n # we do not need a WAYF service since there is\n # only an IdP defined here. 
This IdP should be\n # present in our metadata\n\n # the keys of this dictionary are entity ids\n 'https://adfs-server.adfs.com/federationmetadata/2007-06/federationmetadata.xml': {\n 'single_sign_on_service': {\n saml2.BINDING_HTTP_REDIRECT: 'https://adfs-server.adfs.com/adfs/ls/idpinitiatedsignon.aspx',\n },\n 'single_logout_service': {\n saml2.BINDING_HTTP_REDIRECT: 'https://adfs-server.adfs.com/adfs/ls/?wa=wsignout1.0',\n },\n },\n },\n },\n},\n\n# where the remote metadata is stored\n'metadata': {\n 'local': [path.join(CERTS_DIR, 'idp_federation_metadata.xml')],\n},\n\n# set to 1 to output debugging information\n'debug': 1,\n\n# Signing\n'key_file': '', \n'cert_file': path.join(CERTS_DIR, 'idp.crt'), # from IdP\n\n# Encryption\n'encryption_keypairs': [{\n 'key_file': path.join(CERTS_DIR, 'sp.key'), # private part\n 'cert_file': path.join(CERTS_DIR, 'sp.crt'), # public part\n}],\n\n'valid_for': 24, # how long is our metadata valid\n
}
```
"},{"location":"config/config_seafile_with_ADFS/#config-adfs-server","title":"Config ADFS Server","text":"Relying Party Trust is the connection between Seafile and ADFS.
Log into the ADFS server and open the ADFS management.
Double click Trust Relationships, then right click Relying Party Trusts, select Add Relying Party Trust\u2026.
Select Import data about the relying party published online or on a local network, input https://demo.seafile.com/saml2/metadata/
in the Federation metadata address.
Then Next until Finish.
Add Relying Party Claim Rules
Relying Party Claim Rules is used for attribute communication between Seafile and users in Windows Domain.
Important: Users in the Windows domain must have the E-mail value set.
Right-click on the relying party trust and select Edit Claim Rules...
On the Issuance Transform Rules tab select Add Rules...
Select Send LDAP Attribute as Claims as the claim rule template to use.
Give the claim a name such as LDAP Attributes.
Set the Attribute Store to Active Directory, the LDAP Attribute to E-Mail-Addresses, and the Outgoing Claim Type to E-mail Address.
Select Finish.
Click Add Rule... again.
Select Transform an Incoming Claim.
Give it a name such as Email to Name ID.
Incoming claim type should be E-mail Address (it must match the Outgoing Claim Type in rule #1).
The Outgoing claim type is Name ID (this is required in Seafile settings policy 'name_id_format': saml2.saml.NAMEID_FORMAT_EMAILADDRESS
).
The Outgoing name ID format is Email.
Pass through all claim values and click Finish.
https://support.zendesk.com/hc/en-us/articles/203663886-Setting-up-single-sign-on-using-Active-Directory-with-ADFS-and-SAML-Plus-and-Enterprise-
http://wiki.servicenow.com/?title=Configuring_ADFS_2.0_to_Communicate_with_SAML_2.0#gsc.tab=0
https://github.com/rohe/pysaml2/blob/master/src/saml2/saml.py
Note: The subject line may vary between releases; this is based on Release 2.0.1. Restart Seahub so that your changes take effect.
"},{"location":"config/customize_email_notifications/#user-reset-hisher-password","title":"User reset his/her password","text":"Subject
seahub/seahub/auth/forms.py line:103
Body
seahub/seahub/templates/registration/password_reset_email.html
Note: You can copy password_reset_email.html to seahub-data/custom/templates/registration/password_reset_email.html
and modify the new one. In this way, the customization will be maintained after upgrade.
Subject
seahub/seahub/views/sysadmin.py line:424
Body
seahub/seahub/templates/sysadmin/user_add_email.html
Note: You can copy user_add_email.html to seahub-data/custom/templates/sysadmin/user_add_email.html
and modify the new one. In this way, the customization will be maintained after upgrade.
Subject
seahub/seahub/views/sysadmin.py line:368
Body
seahub/seahub/templates/sysadmin/user_reset_email.html
Note: You can copy user_reset_email.html to seahub-data/custom/templates/sysadmin/user_reset_email.html
and modify the new one. In this way, the customization will be maintained after upgrade.
Subject
seahub/seahub/share/views.py line:668
Body
seahub/seahub/templates/shared_link_email.html
"},{"location":"config/details_about_file_search/","title":"Details about File Search","text":""},{"location":"config/details_about_file_search/#search-options","title":"Search Options","text":"The following options can be set in seafevents.conf to control the behaviors of file search. You need to restart seafile and seahub to make them take effect.
[INDEX FILES]\n## must be \"true\" to enable search\nenabled = true\n\n## The interval the search index is updated. Can be s(seconds), m(minutes), h(hours), d(days)\ninterval=10m\n\n## this is for improving the search speed\nhighlight = fvh \n\n## If true, indexes the contents of office/pdf files while updating search index\n## Note: If you change this option from \"false\" to \"true\", then you need to clear the search index and update the index again.\nindex_office_pdf=false\n\n## From 9.0.7 pro, Seafile supports connecting to Elasticsearch through username and password, you need to configure username and password for the Elasticsearch server\nusername = elastic # username to connect to Elasticsearch\npassword = elastic_password # password to connect to Elasticsearch\n\n## From 9.0.7 pro, Seafile supports connecting to elasticsearch via HTTPS, you need to configure HTTPS for the Elasticsearch server\nscheme = https # The default is http. If the Elasticsearch server is not configured with HTTPS, the scheme and cafile do not need to be configured\ncafile = path/to/cert.pem # The certificate path for user authentication. If the Elasticsearch server does not enable certificate authentication, do not need to be configured\n\n## From version 11.0.5 Pro, you can custom ElasticSearch index names for distinct instances when intergrating multiple Seafile servers to a single ElasticSearch Server.\nrepo_status_index_name = your-repo-status-index-name # default is `repo_head`\nrepo_files_index_name = your-repo-files-index-name # default is `repofiles`\n
"},{"location":"config/details_about_file_search/#enable-full-text-search-for-officepdf-files","title":"Enable full text search for Office/PDF files","text":"Full text search is not enabled by default to save system resources. If you want to enable it, you need to follow the instructions below.
"},{"location":"config/details_about_file_search/#modify-seafeventsconf","title":"Modifyseafevents.conf
","text":"Deploy in DockerDeploy from binary packages cd /opt/seafile-data/seafile/conf\nnano seafevents.conf\n
cd /opt/seafile/conf\nnano seafevents.conf\n
set index_office_pdf
to true
...\n[INDEX FILES]\n...\nindex_office_pdf=true\n...\n
"},{"location":"config/details_about_file_search/#restart-seafile-server","title":"Restart Seafile server","text":"Deploy in DockerDeploy from binary packages docker exec -it seafile bash\ncd /scripts\n./seafile.sh restart\n\n# delete the existing search index and recreate it\n./pro/pro.py search --clear\n./pro/pro.py search --update\n
cd /opt/seafile/seafile-server-latest\n./seafile.sh restart\n\n# delete the existing search index and recreate it\n./pro/pro.py search --clear\n./pro/pro.py search --update\n
"},{"location":"config/details_about_file_search/#common-problems","title":"Common problems","text":""},{"location":"config/details_about_file_search/#how-to-rebuild-the-index-if-something-went-wrong","title":"How to rebuild the index if something went wrong","text":"You can rebuild search index by running:
Deploy in DockerDeploy from binary packagesdocker exec -it seafile bash\ncd /scripts\n./pro/pro.py search --clear\n./pro/pro.py search --update\n
cd /opt/seafile/seafile-server-latest\n./pro/pro.py search --clear\n./pro/pro.py search --update\n
Tip
If this does not work, you can try the following steps:
rm -rf pro-data/search
./pro/pro.py search --update
Create an elasticsearch service on AWS according to the documentation.
Configure the seafevents.conf:
[INDEX FILES]\nenabled = true\ninterval = 10m\nindex_office_pdf=true\nes_host = your domain endpoint(for example, https://search-my-domain.us-east-1.es.amazonaws.com)\nes_port = 443\nscheme = https\nusername = master user\npassword = password\nhighlight = fvh\nrepo_status_index_name = your-repo-status-index-name # default is `repo_head`\nrepo_files_index_name = your-repo-files-index-name # default is `repofiles`\n
Note
The version of the Python third-party package elasticsearch
cannot be greater than 7.14.0, otherwise the elasticsearch service cannot be accessed: https://docs.aws.amazon.com/opensearch-service/latest/developerguide/samplecode.html#client-compatibility, https://github.com/elastic/elasticsearch-py/pull/1623.
The search index is updated every 10 minutes by default, so searches return no results until the first index update has run.
To be able to search immediately, update the index manually:
docker exec -it seafile bash\ncd /scripts\n./pro/pro.py search --update\n
cd /opt/seafile/seafile-server-latest\n./pro/pro.py search --update\n
"},{"location":"config/details_about_file_search/#encrypted-files-cannot-be-searched","title":"Encrypted files cannot be searched","text":"The server cannot index the contents of encrypted files, because it has no way to decrypt them.
"},{"location":"config/env/","title":".env","text":"The .env
file specifies the components used by the Seafile-docker instance and the environment variables required by each component. Its default contents are listed below:
COMPOSE_FILE='seafile-server.yml,caddy.yml'\nCOMPOSE_PATH_SEPARATOR=','\n\n\nSEAFILE_IMAGE=seafileltd/seafile-pro-mc:12.0-latest\nSEAFILE_DB_IMAGE=mariadb:10.11\nSEAFILE_MEMCACHED_IMAGE=memcached:1.6.29\nSEAFILE_ELASTICSEARCH_IMAGE=elasticsearch:8.15.0 # pro edition only\nSEAFILE_CADDY_IMAGE=lucaslorentz/caddy-docker-proxy:2.9\n\nSEAFILE_VOLUME=/opt/seafile-data\nSEAFILE_MYSQL_VOLUME=/opt/seafile-mysql/db\nSEAFILE_ELASTICSEARCH_VOLUME=/opt/seafile-elasticsearch/data # pro edition only\nSEAFILE_CADDY_VOLUME=/opt/seafile-caddy\n\nSEAFILE_MYSQL_DB_HOST=db\nINIT_SEAFILE_MYSQL_ROOT_PASSWORD=ROOT_PASSWORD\nSEAFILE_MYSQL_DB_USER=seafile\nSEAFILE_MYSQL_DB_PASSWORD=PASSWORD\nSEAFILE_MYSQL_DB_SEAFILE_DB_NAME=seafile_db\nSEAFILE_MYSQL_DB_CCNET_DB_NAME=ccnet_db\nSEAFILE_MYSQL_DB_SEAHUB_DB_NAME=seahub_db\n\nTIME_ZONE=Etc/UTC\n\nJWT_PRIVATE_KEY=\n\nSEAFILE_SERVER_HOSTNAME=seafile.example.com\nSEAFILE_SERVER_PROTOCOL=https\n\nINIT_SEAFILE_ADMIN_EMAIL=me@example.com\nINIT_SEAFILE_ADMIN_PASSWORD=asecret\nINIT_S3_STORAGE_BACKEND_CONFIG=false # pro edition only\nINIT_S3_COMMIT_BUCKET=<your-commit-objects> # pro edition only\nINIT_S3_FS_BUCKET=<your-fs-objects> # pro edition only\nINIT_S3_BLOCK_BUCKET=<your-block-objects> # pro edition only\nINIT_S3_KEY_ID=<your-key-id> # pro edition only\nINIT_S3_SECRET_KEY=<your-secret-key> # pro edition only\n\nCLUSTER_INIT_MODE=true # cluster only\nCLUSTER_INIT_MEMCACHED_HOST=<your memcached host> # cluster only\nCLUSTER_INIT_ES_HOST=<your elasticsearch server HOST> # cluster only\nCLUSTER_INIT_ES_PORT=9200 # cluster only\nCLUSTER_MODE=frontend # cluster only\n\n\nSEADOC_IMAGE=seafileltd/sdoc-server:1.0-latest\nSEADOC_VOLUME=/opt/seadoc-data\n\nENABLE_SEADOC=false\nSEADOC_SERVER_URL=http://seafile.example.com/sdoc-server\n\n\nNOTIFICATION_SERVER_IMAGE=seafileltd/notification-server:12.0-latest\nNOTIFICATION_SERVER_VOLUME=/opt/notification-data\n
"},{"location":"config/env/#seafile-docker-configurations","title":"Seafile-docker configurations","text":""},{"location":"config/env/#components-configurations","title":"Components configurations","text":"COMPOSE_FILE
: .yml
files for components of Seafile-docker, each .yml
must be separated by the symbol defined in COMPOSE_PATH_SEPARATOR
. The core components are involved in seafile-server.yml
and caddy.yml
which must be taken in this term.COMPOSE_PATH_SEPARATOR
: The symbol used to separate the .yml
files in term COMPOSE_FILE
, default is ','.SEAFILE_IMAGE
: The image of Seafile-server, default is seafileltd/seafile-pro-mc:12.0-latest
.SEAFILE_DB_IMAGE
: Database server image, default is mariadb:10.11
.SEAFILE_MEMCACHED_IMAGE
: Cached server image, default is memcached:1.6.29
SEAFILE_ELASTICSEARCH_IMAGE
: Only valid in pro edition. The elasticsearch image, default is elasticsearch:8.15.0
.SEAFILE_CADDY_IMAGE
: Caddy server image, default is lucaslorentz/caddy-docker-proxy:2.9
.SEADOC_IMAGE
: Only valid after integrating SeaDoc. SeaDoc server image, default is seafileltd/sdoc-server:1.0-latest
.SEAFILE_VOLUME
: The volume directory of Seafile data, default is /opt/seafile-data
.SEAFILE_MYSQL_VOLUME
: The volume directory of MySQL data, default is /opt/seafile-mysql/db
.SEAFILE_CADDY_VOLUME
: The volume directory of Caddy data used to store certificates obtained from Let's Encrypt's, default is /opt/seafile-caddy
.SEAFILE_ELASTICSEARCH_VOLUME
: Only valid in pro edition. The volume directory of Elasticsearch data, default is /opt/seafile-elasticsearch/data
.SEADOC_VOLUME
: Only valid after integrating SeaDoc. The volume directory of SeaDoc server data, default is /opt/seadoc-data
.SEAFILE_MYSQL_DB_HOST
: The host address of Mysql, default is the pre-defined service name db
in Seafile-docker instance.INIT_SEAFILE_MYSQL_ROOT_PASSWORD
: (Only required on first deployment) The root
password of MySQL. SEAFILE_MYSQL_DB_USER
: The user of MySQL (database
- user
can be found in conf/seafile.conf
).SEAFILE_MYSQL_DB_PASSWORD
: The user seafile
password of MySQL.SEAFILE_MYSQL_DB_SEAFILE_DB_NAME
: The name of Seafile database name, default is seafile_db
SEAFILE_MYSQL_DB_CCNET_DB_NAME
: The name of ccnet database name, default is ccnet_db
SEAFILE_MYSQL_DB_SEAHUB_DB_NAME
: The name of seahub database name, default is seahub_db
SEAFILE_MYSQL_DB_PASSWORD
: The user seafile
password of MySQLJWT
: JWT_PRIVATE_KEY, A random string with a length of no less than 32 characters, generate example: pwgen -s 40 1
SEAFILE_SERVER_HOSTNAME
: Seafile server hostname or domainSEAFILE_SERVER_PROTOCOL
: Seafile server protocol (http or https)TIME_ZONE
: Time zone (default UTC)INIT_SEAFILE_ADMIN_EMAIL
: Admin usernameINIT_SEAFILE_ADMIN_PASSWORD
: Admin passwordENABLE_SEADOC
: Enable the SeaDoc server or not, default is false
.SEADOC_SERVER_URL
: Only valid in ENABLE_SEADOC=true
. Url of Seadoc server (e.g., http://seafile.example.com/sdoc-server).CLUSTER_INIT_MODE
: (only valid in pro edition at deploying first time). Cluster initialization mode, in which the necessary configuration files for the service to run will be generated (but the service will not be started). If the configuration file already exists, no operation will be performed. The default value is true
. When the configuration file is generated, be sure to set this item to false
.CLUSTER_INIT_MEMCACHED_HOST
: (only valid in pro edition at deploying first time). Cluster Memcached host. (If your Memcached server dose not use port 11211
, please modify the seahub_settings.py and seafile.conf).CLUSTER_INIT_ES_HOST
: (only valid in pro edition at deploying first time). Your cluster Elasticsearch server host.CLUSTER_INIT_ES_PORT
: (only valid in pro edition at deploying first time). Your cluster Elasticsearch server port. Default is 9200
.CLUSTER_MODE
: Seafile service node type, i.e., frontend
(default) or backend
INIT_S3_STORAGE_BACKEND_CONFIG
: Whether to configure S3 storage backend synchronously during initialization (i.e., the following features in this section, for more details, please refer to AWS S3), default is false
.INIT_S3_COMMIT_BUCKET
: S3 storage backend fs objects bucketINIT_S3_FS_BUCKET
: S3 storage backend block objects bucketINIT_S3_BLOCK_BUCKET
: S3 storage backend block objects bucketINIT_S3_KEY_ID
: S3 storage backend key IDINIT_S3_SECRET_KEY
: S3 storage backend secret keyINIT_S3_USE_V4_SIGNATURE
: Use the v4 protocol of S3 if enabled, default is true
INIT_S3_AWS_REGION
: Region of your buckets (AWS only), default is us-east-1
. (Only valid when INIT_S3_USE_V4_SIGNATURE
is set to true
)INIT_S3_HOST
: Host of your buckets, default is s3.us-east-1.amazonaws.com
. (Only valid when INIT_S3_USE_V4_SIGNATURE
is set to true
)INIT_S3_USE_HTTPS
: Use HTTPS connections to S3 if enabled, default is true
This documentation is for the Community Edition. If you're using Pro Edition, please refer to the Seafile Pro documentation
"},{"location":"config/ldap_in_11.0_ce/#how-does-ldap-user-management-work-in-seafile","title":"How does LDAP User Management work in Seafile","text":"When Seafile is integrated with LDAP, users in the system can be divided into two tiers:
Users within Seafile's internal user database. Some attributes are attached to these users, such as whether it's a system admin user, whether it's activated.
Users in LDAP server. These are all the intended users of Seafile inside the LDAP server. Seafile doesn't manipulate these users directly. It has to import them into its internal database before setting attributes on them.
When Seafile counts the number of users in the system, it only counts the activated users in its internal database.
"},{"location":"config/ldap_in_11.0_ce/#basic-ldap-integration","title":"Basic LDAP Integration","text":"The only requirement for Seafile to use LDAP for authentication is that there must be a unique identifier for each user in the LDAP server. This id should also be user-friendly as the users will use it as username when login. Below are some usual options for this unique identifier:
user-login-name@domain-name
, e.g. john@example.com
. It's not a real email address, but it works fine as the unique identifier.The identifier is stored in table social_auth_usersocialauth
to map the identifier to internal user ID in Seafile. When this ID is changed in LDAP for a user, you only need to update social_auth_usersocialauth
table
Add the following options to seahub_settings.py
. Examples are as follows:
ENABLE_LDAP = True\nLDAP_SERVER_URL = 'ldap://192.168.0.1' \nLDAP_BASE_DN = 'ou=test,dc=seafile,dc=ren' \nLDAP_ADMIN_DN = 'administrator@example.com' \nLDAP_ADMIN_PASSWORD = 'yourpassword' \nLDAP_PROVIDER = 'ldap' \nLDAP_LOGIN_ATTR = 'email' \nLDAP_CONTACT_EMAIL_ATTR = '' \nLDAP_USER_ROLE_ATTR = '' \nLDAP_USER_FIRST_NAME_ATTR = 'givenName' \nLDAP_USER_LAST_NAME_ATTR = 'sn' \nLDAP_USER_NAME_REVERSE = False \nLDAP_FILTER = 'memberOf=CN=testgroup,OU=test,DC=seafile,DC=ren' \n
Meaning of some options:
variable descriptionLDAP_SERVER_URL
The URL of LDAP server LDAP_BASE_DN
The root node of users who can log in to Seafile in the LDAP server LDAP_ADMIN_DN
DN of the administrator used to query the LDAP server for information. For OpenLDAP, it may be cn=admin,dc=example,dc=com
LDAP_ADMIN_PASSWORD
Password of LDAP_ADMIN_DN
LDAP_PROVIDER
Identify the source of the user, used in the table social_auth_usersocialauth
, defaults to 'ldap' LDAP_LOGIN_ATTR
User's attribute used to log in to Seafile. It should be a unique identifier for the user in LDAP server. Learn more about this id from the descriptions at the beginning of this section. LDAP_CONTACT_EMAIL_ATTR
LDAP user's contact_email
attribute LDAP_USER_ROLE_ATTR
LDAP user's role attribute LDAP_USER_FIRST_NAME_ATTR
Attribute for user's first name. It's \"givenName\"
by default. LDAP_USER_LAST_NAME_ATTR
Attribute for user's last name. It's \"sn\"
by default. LDAP_USER_NAME_REVERSE
In some languages, such as Chinese, the display order of the first and last name is reversed. Set this option if you need it. LDAP_FILTER
Additional filter conditions. Users who meet the filter conditions can log in, otherwise they cannot log in. Tips for choosing LDAP_BASE_DN
and LDAP_ADMIN_DN
:
To determine the LDAP_BASE_DN
, you first have to navigate your organization hierachy on the domain controller GUI.
If you want to allow all users to use Seafile, you can use cn=users,dc=yourdomain,dc=com
as LDAP_BASE_DN
(with proper adjustment for your own needs).
If you want to limit users to a certain OU (Organization Unit), you run dsquery
command on the domain controller to find out the DN for this OU. For example, if the OU is staffs
, you can run dsquery ou -name staff
. More information can be found here.
AD supports user@domain.name
format for the LDAP_ADMIN_DN
option. For example you can use administrator@example.com for LDAP_ADMIN_DN
. Sometime the domain controller doesn't recognize this format. You can still use dsquery
command to find out user's DN. For example, if the user name is 'seafileuser', run dsquery user -name seafileuser
. More information here.
Multiple base DN is useful when your company has more than one OUs to use Seafile. You can specify a list of base DN in the LDAP_BASE_DN
option. The DNs are separated by \";\", e.g.
LDAP_BASE_DN = 'ou=developers,dc=example,dc=com;ou=marketing,dc=example,dc=com'\n
"},{"location":"config/ldap_in_11.0_ce/#additional-search-filter","title":"Additional Search Filter","text":"Search filter is very useful when you have a large organization but only a portion of people want to use Seafile. The filter can be given by setting LDAP_FILTER
option. The value of this option follows standard LDAP search filter syntax (https://msdn.microsoft.com/en-us/library/aa746475(v=vs.85).aspx).
The final filter used for searching for users is (&($LOGIN_ATTR=*)($LDAP_FILTER))
. $LOGIN_ATTR
and $LDAP_FILTER
will be replaced by your option values.
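The substitution described above can be illustrated with a small sketch (hypothetical helper, not Seafile's implementation) that composes the final search filter from the two option values:

```python
# Hypothetical sketch: compose the final LDAP search filter from
# LDAP_LOGIN_ATTR and LDAP_FILTER, as described in the text above.
def build_final_filter(login_attr, ldap_filter):
    base = f"({login_attr}=*)"
    if ldap_filter:
        return f"(&{base}({ldap_filter}))"
    return base

print(build_final_filter("mail", "memberOf=CN=group,CN=developers,DC=example,DC=com"))
# (&(mail=*)(memberOf=CN=group,CN=developers,DC=example,DC=com))
```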
For example, add below option to seahub_settings.py
:
LDAP_FILTER = 'memberOf=CN=group,CN=developers,DC=example,DC=com'\n
The final search filter would be (&(mail=*)(memberOf=CN=group,CN=developers,DC=example,DC=com))
Note that the case of attribute names in the above example is significant. The memberOf
attribute is only available in Active Directory.
You can use the LDAP_FILTER
option to limit user scope to a certain AD group.
First, you should find out the DN for the group. Again, we'll use the dsquery
command on the domain controller. For example, if group name is 'seafilegroup', run dsquery group -name seafilegroup
.
Add below option to seahub_settings.py
:
LDAP_FILTER = 'memberOf={output of dsquery command}'\n
"},{"location":"config/ldap_in_11.0_ce/#using-tls-connection-to-ldap-server","title":"Using TLS connection to LDAP server","text":"If your LDAP service supports TLS connections, you can configure LDAP_SERVER_URL
as the access address of the ldaps protocol to use TLS to connect to the LDAP service, for example:
LDAP_SERVER_URL = 'ldaps://192.168.0.1:636/'\n
"},{"location":"config/ldap_in_11.0_pro/","title":"Configure Seafile Pro Edition to use LDAP","text":""},{"location":"config/ldap_in_11.0_pro/#how-does-ldap-user-management-work-in-seafile","title":"How does LDAP User Management work in Seafile","text":"When Seafile is integrated with LDAP, users in the system can be divided into two tiers:
Users within Seafile's internal user database. Some attributes are attached to these users, such as whether the user is a system admin and whether the account is activated.
Users in LDAP server. These are all the intended users of Seafile inside the LDAP server. Seafile doesn't manipulate these users directly. It has to import them into its internal database before setting attributes on them.
When Seafile counts the number of users in the system, it only counts the activated users in its internal database.
"},{"location":"config/ldap_in_11.0_pro/#basic-ldap-integration","title":"Basic LDAP Integration","text":"The only requirement for Seafile to use LDAP for authentication is that there must be a unique identifier for each user in the LDAP server. This id should also be user-friendly, as users will use it as their username when logging in. Below are some usual options for this unique identifier:
user-login-name@domain-name
, e.g. john@example.com
. It's not a real email address, but it works fine as the unique identifier. The identifier is stored in the table social_auth_usersocialauth
to map it to the internal user ID in Seafile. When this ID changes in LDAP for a user, you only need to update the social_auth_usersocialauth
table.
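Conceptually, this mapping works like a lookup table from the external LDAP identifier to the internal user ID. The sketch below is a hypothetical illustration (an in-memory dict, not Seafile's actual code or schema) of why only the mapping entry needs updating when the external ID changes:

```python
# Hypothetical sketch of the identifier mapping kept in
# social_auth_usersocialauth: external LDAP ID -> internal Seafile user ID.
id_map = {"john@example.com": "internal-uuid-1234"}

# If John's external ID changes in LDAP, only the mapping key is updated;
# the internal user ID (and everything attached to it) stays the same.
id_map["john.doe@example.com"] = id_map.pop("john@example.com")

print(id_map["john.doe@example.com"])  # internal-uuid-1234
```

In the real system this corresponds to updating the row for that user in the social_auth_usersocialauth table.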
Add the following options to seahub_settings.py
. Examples are as follows:
ENABLE_LDAP = True\nLDAP_SERVER_URL = 'ldap://192.168.0.1' \nLDAP_BASE_DN = 'ou=test,dc=seafile,dc=ren' \nLDAP_ADMIN_DN = 'administrator@example.com' \nLDAP_ADMIN_PASSWORD = 'yourpassword' \nLDAP_PROVIDER = 'ldap' \nLDAP_LOGIN_ATTR = 'email' \nLDAP_CONTACT_EMAIL_ATTR = '' \nLDAP_USER_ROLE_ATTR = '' \nLDAP_USER_FIRST_NAME_ATTR = 'givenName' \nLDAP_USER_LAST_NAME_ATTR = 'sn' \nLDAP_USER_NAME_REVERSE = False \nLDAP_FILTER = 'memberOf=CN=testgroup,OU=test,DC=seafile,DC=ren' \n
Meaning of some options:
variable descriptionLDAP_SERVER_URL
The URL of LDAP server LDAP_BASE_DN
The root node of users who can log in to Seafile in the LDAP server LDAP_ADMIN_DN
DN of the administrator used to query the LDAP server for information. For OpenLDAP, it may be cn=admin,dc=example,dc=com
LDAP_ADMIN_PASSWORD
Password of LDAP_ADMIN_DN
LDAP_PROVIDER
Identify the source of the user, used in the table social_auth_usersocialauth
, defaults to 'ldap' LDAP_LOGIN_ATTR
User's attribute used to log in to Seafile. It should be a unique identifier for the user in LDAP server. Learn more about this id from the descriptions at the beginning of this section. LDAP_CONTACT_EMAIL_ATTR
LDAP user's contact_email
attribute LDAP_USER_ROLE_ATTR
LDAP user's role attribute LDAP_USER_FIRST_NAME_ATTR
Attribute for user's first name. It's \"givenName\"
by default. LDAP_USER_LAST_NAME_ATTR
Attribute for user's last name. It's \"sn\"
by default. LDAP_USER_NAME_REVERSE
In some languages, such as Chinese, the display order of the first and last name is reversed. Set this option if you need it. LDAP_FILTER
Additional filter conditions. Users who meet the filter conditions can log in, otherwise they cannot log in. Tips for choosing LDAP_BASE_DN
and LDAP_ADMIN_DN
:
To determine the LDAP_BASE_DN
, you first have to navigate your organization hierarchy on the domain controller GUI.
If you want to allow all users to use Seafile, you can use cn=users,dc=yourdomain,dc=com
as LDAP_BASE_DN
(with proper adjustment for your own needs).
If you want to limit users to a certain OU (Organization Unit), you run dsquery
command on the domain controller to find out the DN for this OU. For example, if the OU is staff
, you can run dsquery ou -name staff
. More information can be found here.
AD supports user@domain.name
format for the LDAP_ADMIN_DN
option. For example you can use administrator@example.com for LDAP_ADMIN_DN
. Sometimes the domain controller doesn't recognize this format. You can still use dsquery
command to find out user's DN. For example, if the user name is 'seafileuser', run dsquery user -name seafileuser
. More information here.
In Seafile Pro, in addition to importing users into the internal database when they log in, you can also configure Seafile to periodically sync user information from the LDAP server into the internal database.
A user's full name, department and contact email address can be synced to the internal database, which makes it easier to search for a specific user. A user's Windows or Unix login id can also be synced, which allows the user to log in with their familiar login id. When a user is removed from LDAP, the corresponding user in Seafile will be deactivated; otherwise, they could still sync files with the Seafile client or access the web interface. After synchronization is complete, you can see the user's full name, department and contact email on their profile page.
"},{"location":"config/ldap_in_11.0_pro/#sync-configuration-items","title":"Sync configuration items","text":"Add the following options to seahub_settings.py
. Examples are as follows:
# Basic configuration items\nENABLE_LDAP = True\n......\n\n# ldap user sync options.\nLDAP_SYNC_INTERVAL = 60 \nENABLE_LDAP_USER_SYNC = True \nLDAP_USER_OBJECT_CLASS = 'person'\nLDAP_DEPT_ATTR = '' \nLDAP_UID_ATTR = '' \nLDAP_AUTO_REACTIVATE_USERS = True \nLDAP_USE_PAGED_RESULT = False \nIMPORT_NEW_USER = True \nACTIVATE_USER_WHEN_IMPORT = True \nDEACTIVE_USER_IF_NOTFOUND = False \nENABLE_EXTRA_USER_INFO_SYNC = True \n
Meaning of some options:
Variable Description LDAP_SYNC_INTERVAL The interval to sync. Unit is minutes. Defaults to 60 minutes. ENABLE_LDAP_USER_SYNC set to \"true\" if you want to enable ldap user synchronization LDAP_USER_OBJECT_CLASS This is the name of the class used to search for user objects. In Active Directory, it's usually \"person\". The default value is \"person\". LDAP_DEPT_ATTR Attribute for department info. LDAP_UID_ATTR Attribute for Windows login name. If this is synchronized, users can also log in with their Windows login name. In AD, the attribute sAMAccountName
can be used as LDAP_UID_ATTR
. The attribute will be stored as login_id in Seafile (in the seahub_db.profile_profile table). LDAP_AUTO_REACTIVATE_USERS Whether to auto activate deactivated users, defaults to true LDAP_USE_PAGED_RESULT Whether to use the pagination extension. It is useful when you have more than 1000 users in the LDAP server. IMPORT_NEW_USER Whether to import new users when syncing users. ACTIVATE_USER_WHEN_IMPORT Whether to activate the user automatically when imported. DEACTIVE_USER_IF_NOTFOUND set to \"true\" if you want to deactivate a user when he/she was deleted in the AD server. ENABLE_EXTRA_USER_INFO_SYNC Enable synchronization of additional user information, including the user's full name, department, and Windows login name, etc."},{"location":"config/ldap_in_11.0_pro/#importing-users-without-activating-them","title":"Importing Users without Activating Them","text":"The users imported with the above configuration will be activated by default. Some organizations with a large number of users may want to import user information (such as the user's full name) without activating the imported users, because activating all imported users would require licenses for all users in LDAP, which may not be affordable.
Seafile provides a combination of options for such use case. You can modify below option in seahub_settings.py
:
ACTIVATE_USER_WHEN_IMPORT = False\n
This prevents Seafile from activating imported users. Then, add below option to seahub_settings.py
:
ACTIVATE_AFTER_FIRST_LOGIN = True\n
This option will automatically activate users when they login to Seafile for the first time.
"},{"location":"config/ldap_in_11.0_pro/#reactivating-users","title":"Reactivating Users","text":"When you set the DEACTIVE_USER_IF_NOTFOUND
option, a user will be deactivated when he/she is not found in LDAP server. By default, even after this user reappears in the LDAP server, it won't be reactivated automatically. This is to prevent auto reactivating a user that was manually deactivated by the system admin.
However, sometimes it's desirable to auto reactivate such users. You can modify below option in seahub_settings.py
:
LDAP_AUTO_REACTIVATE_USERS = True\n
"},{"location":"config/ldap_in_11.0_pro/#manually-trigger-synchronization","title":"Manually Trigger Synchronization","text":"To test your LDAP sync configuration, you can run the sync command manually.
To trigger LDAP sync manually:
cd seafile-server-latest\n./pro/pro.py ldapsync\n
For Seafile Docker
docker exec -it seafile /opt/seafile/seafile-server-latest/pro/pro.py ldapsync\n
"},{"location":"config/ldap_in_11.0_pro/#setting-up-ldap-group-sync-optional","title":"Setting Up LDAP Group Sync (optional)","text":""},{"location":"config/ldap_in_11.0_pro/#how-it-works","title":"How It Works","text":"The importing or syncing process maps groups from LDAP directory server to groups in Seafile's internal database. This process is one-way.
Any changes to groups in the database won't propagate back to LDAP;
Any changes to groups in the database, except for \"setting a member as group admin\", will be overwritten in the next LDAP sync operation. If you want to add or delete members, you can only do that on LDAP server.
The creator of imported groups will be set to the system admin.
There are two modes of operation:
Periodical: the syncing process will be executed in a fixed interval
Manual: there is a script you can run to trigger the syncing once
Before enabling LDAP group sync, you should have configured LDAP authentication. See Basic LDAP Integration for details.
The following are LDAP group sync related options:
# ldap group sync options.\nENABLE_LDAP_GROUP_SYNC = True # Whether to enable group sync\nLDAP_GROUP_OBJECT_CLASS = 'group' # This is the name of the class used to search for group objects.\nLDAP_GROUP_MEMBER_ATTR = 'member' # The attribute field to use when loading the group's members. \n # For most directory servers, the attributes is \"member\" \n # which is the default value.For \"posixGroup\", it should be set to \"memberUid\".\nLDAP_USER_ATTR_IN_MEMBERUID = 'uid' # The user attribute set in 'memberUid' option, \n # which is used in \"posixGroup\".The default value is \"uid\".\nLDAP_GROUP_UUID_ATTR = 'objectGUID' # Used to uniquely identify groups in LDAP\nLDAP_GROUP_FILTER = '' # An additional filter to use when searching group objects.\n # If it's set, the final filter used to run search is \"(&(objectClass=GROUP_OBJECT_CLASS)(GROUP_FILTER))\";\n # otherwise the final filter would be \"(objectClass=GROUP_OBJECT_CLASS)\".\nLDAP_USE_GROUP_MEMBER_RANGE_QUERY = False # When a group contains too many members, \n # AD will only return part of them. Set this option to TRUE\n # to make LDAP sync work with large groups.\nDEL_GROUP_IF_NOT_FOUND = False # Set to \"true\", sync process will delete the group if not found it in LDAP server.\nLDAP_SYNC_GROUP_AS_DEPARTMENT = False # Whether to sync groups as top-level departments in Seafile.\n # Learn more about departments in Seafile [here](https://help.seafile.com/sharing_collaboration/departments/).\nLDAP_DEPT_NAME_ATTR = '' # Used to get the department name.\n
Meaning of some options:
variable description ENABLE_LDAP_GROUP_SYNC Whether to enable group sync. LDAP_GROUP_OBJECT_CLASS This is the name of the class used to search for group objects. LDAP_GROUP_MEMBER_ATTR The attribute field to use when loading the group's members. For most directory servers, the attribute is \"member\" which is the default value. For \"posixGroup\", it should be set to \"memberUid\". LDAP_USER_ATTR_IN_MEMBERUID The user attribute set in 'memberUid' option, which is used in \"posixGroup\". The default value is \"uid\". LDAP_GROUP_UUID_ATTR Used to uniquely identify groups in LDAP. LDAP_GROUP_FILTER An additional filter to use when searching group objects. If it's set, the final filter used to run the search is (&(objectClass=GROUP_OBJECT_CLASS)(GROUP_FILTER))
; otherwise the final filter would be (objectClass=GROUP_OBJECT_CLASS)
. LDAP_USE_GROUP_MEMBER_RANGE_QUERY When a group contains too many members, AD will only return part of them. Set this option to TRUE to make LDAP sync work with large groups. DEL_GROUP_IF_NOT_FOUND Set to \"true\", the sync process will delete a group if it is not found in the LDAP server. LDAP_SYNC_GROUP_AS_DEPARTMENT Whether to sync groups as top-level departments in Seafile. Learn more about departments in Seafile here. LDAP_DEPT_NAME_ATTR Used to get the department name. Tip
The search base for groups is the option LDAP_BASE_DN
.
Some LDAP servers, such as Active Directory, allow a group to be a member of another group. This is called "group nesting". If we find a nested group B in group A, we recursively add all the members from group B into group A. Group B is still imported as a separate group. That is, all members of group B are also members of group A.
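The recursive expansion just described can be sketched as follows (a simplified illustration with an in-memory group table, not Seafile's implementation):

```python
# Hypothetical sketch: recursively collect all user members of a group,
# flattening nested groups as described for AD "group nesting".
groups = {
    "A": ["user1", "B"],      # group A contains user1 and the nested group B
    "B": ["user2", "user3"],  # group B is still imported as its own group
}

def expand_members(group, groups, seen=None):
    seen = seen or set()
    members = set()
    for m in groups.get(group, []):
        if m in groups:              # member is itself a group: recurse
            if m not in seen:        # guard against membership cycles
                seen.add(m)
                members |= expand_members(m, groups, seen)
        else:
            members.add(m)
    return members

print(sorted(expand_members("A", groups)))  # ['user1', 'user2', 'user3']
```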
In some LDAP servers, such as OpenLDAP, it's common practice to use Posix groups to store group membership. To import Posix groups as Seafile groups, set the LDAP_GROUP_OBJECT_CLASS
option to posixGroup
. A posixGroup
object in LDAP usually contains a multi-value attribute for the list of member UIDs. The name of this attribute can be set with the LDAP_GROUP_MEMBER_ATTR
option. It's memberUid
by default. The value of the memberUid
attribute is an ID that can be used to identify a user, which corresponds to an attribute in the user object. The name of this ID attribute is usually uid
, but can be set via the LDAP_USER_ATTR_IN_MEMBERUID
option. Note that posixGroup
doesn't support nested groups.
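Putting the posixGroup settings together, a seahub_settings.py fragment for OpenLDAP might look like the sketch below. The attribute names shown are common defaults; verify them against your own directory schema:

```python
# Sketch: sync OpenLDAP posixGroup entries as Seafile groups.
# Attribute names are typical defaults (assumptions) -- check your directory.
ENABLE_LDAP_GROUP_SYNC = True
LDAP_GROUP_OBJECT_CLASS = 'posixGroup'
LDAP_GROUP_MEMBER_ATTR = 'memberUid'   # posixGroup stores member UIDs here
LDAP_USER_ATTR_IN_MEMBERUID = 'uid'    # user attribute the memberUid values refer to
```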
A department in Seafile is a special group. In addition to what you can do with a group, there are two key new features for departments:
Department supports hierarchy. A department can have any levels of sub-departments.
Department can have storage quota.
Seafile supports syncing OU (Organizational Units) from AD/LDAP to departments. The sync process keeps the hierarchical structure of the OUs.
Options for syncing departments from OU:
LDAP_SYNC_DEPARTMENT_FROM_OU = True # Whether to enable sync departments from OU.\nLDAP_DEPT_NAME_ATTR = 'description' # Used to get the department name.\nLDAP_CREATE_DEPARTMENT_LIBRARY = False # If you decide to sync the group as a department,\n # you can set this option to \"true\". In this way, when \n # the group is synchronized for the first time, a library\n # is automatically created for the department, and the \n # library's name is the department's name.\nLDAP_DEPT_REPO_PERM = 'rw' # Set the permissions of the department repo, default permission is 'rw'.\nLDAP_DEFAULT_DEPARTMENT_QUOTA = -2 # You can set a default space quota for each department\n # when you synchronize a group for the first time. The \n # quota is set to unlimited if this option is not set.\n # Unit is MB.\nDEL_DEPARTMENT_IF_NOT_FOUND = False # Set to \"true\", the sync process will delete a department if it is not found in the LDAP server.\n
"},{"location":"config/ldap_in_11.0_pro/#periodical-and-manual-sync","title":"Periodical and Manual Sync","text":"Periodical sync won't happen immediately after you restart the Seafile server. It gets scheduled after the first sync interval. For example, if you set the sync interval to 30 minutes, the first auto sync will happen 30 minutes after you restart. To sync immediately, you need to trigger it manually.
After the sync is run, you should see log messages like the following in logs/seafevents.log. And you should be able to see the groups in system admin page.
[2023-03-30 18:15:05,109] [DEBUG] create group 1, and add dn pair CN=DnsUpdateProxy,CN=Users,DC=Seafile,DC=local<->1 success.\n[2023-03-30 18:15:05,145] [DEBUG] create group 2, and add dn pair CN=Domain Computers,CN=Users,DC=Seafile,DC=local<->2 success.\n[2023-03-30 18:15:05,154] [DEBUG] create group 3, and add dn pair CN=Domain Users,CN=Users,DC=Seafile,DC=local<->3 success.\n[2023-03-30 18:15:05,164] [DEBUG] create group 4, and add dn pair CN=Domain Admins,CN=Users,DC=Seafile,DC=local<->4 success.\n[2023-03-30 18:15:05,176] [DEBUG] create group 5, and add dn pair CN=RAS and IAS Servers,CN=Users,DC=Seafile,DC=local<->5 success.\n[2023-03-30 18:15:05,186] [DEBUG] create group 6, and add dn pair CN=Enterprise Admins,CN=Users,DC=Seafile,DC=local<->6 success.\n[2023-03-30 18:15:05,197] [DEBUG] create group 7, and add dn pair CN=dev,CN=Users,DC=Seafile,DC=local<->7 success.\n
To trigger LDAP sync manually,
cd seafile-server-latest\n./pro/pro.py ldapsync\n
For Seafile Docker
docker exec -it seafile /opt/seafile/seafile-server-latest/pro/pro.py ldapsync\n
"},{"location":"config/ldap_in_11.0_pro/#advanced-ldap-integration-options","title":"Advanced LDAP Integration Options","text":""},{"location":"config/ldap_in_11.0_pro/#multiple-base","title":"Multiple BASE","text":"Multiple base DNs are useful when your company has more than one OU that uses Seafile. You can specify a list of base DNs in the LDAP_BASE_DN
option. The DNs are separated by \";\", e.g.
LDAP_BASE_DN = 'ou=developers,dc=example,dc=com;ou=marketing,dc=example,dc=com'\n
"},{"location":"config/ldap_in_11.0_pro/#additional-search-filter","title":"Additional Search Filter","text":"Search filter is very useful when you have a large organization but only a portion of people want to use Seafile. The filter can be given by setting LDAP_FILTER
option. The value of this option follows standard LDAP search filter syntax (https://msdn.microsoft.com/en-us/library/aa746475(v=vs.85).aspx).
The final filter used for searching for users is (&($LOGIN_ATTR=*)($LDAP_FILTER))
. $LOGIN_ATTR
and $LDAP_FILTER
will be replaced by your option values.
For example, add below option to seahub_settings.py
:
LDAP_FILTER = 'memberOf=CN=group,CN=developers,DC=example,DC=com'\n
The final search filter would be (&(mail=*)(memberOf=CN=group,CN=developers,DC=example,DC=com))
The case of attribute names in the above example is significant. The memberOf
attribute is only available in Active Directory.
You can use the LDAP_FILTER
option to limit user scope to a certain AD group.
First, you should find out the DN for the group. Again, we'll use the dsquery
command on the domain controller. For example, if group name is 'seafilegroup', run dsquery group -name seafilegroup
.
Add below option to seahub_settings.py
:
LDAP_FILTER = 'memberOf={output of dsquery command}'\n
"},{"location":"config/ldap_in_11.0_pro/#using-tls-connection-to-ldap-server","title":"Using TLS connection to LDAP server","text":"If your LDAP service supports TLS connections, you can configure LDAP_SERVER_URL
as the access address of the ldaps protocol to use TLS to connect to the LDAP service, for example:
LDAP_SERVER_URL = 'ldaps://192.168.0.1:636/'\n
"},{"location":"config/ldap_in_11.0_pro/#use-paged-results-extension","title":"Use paged results extension","text":"LDAP protocol version 3 supports the \"paged results\" (PR) extension. When you have a large number of users, this option can greatly improve the performance of listing users. Most directory servers nowadays support this extension.
In Seafile Pro Edition, add this option to seahub_settings.py
to enable PR:
LDAP_USE_PAGED_RESULT = True\n
"},{"location":"config/ldap_in_11.0_pro/#follow-referrals","title":"Follow referrals","text":"Seafile Pro Edition supports automatically following referrals in LDAP search. This is useful for partitioned LDAP or AD servers, where users may be spread across multiple directory servers. For more information about referrals, you can refer to this article.
To configure, add below option to seahub_settings.py
, e.g.:
LDAP_FOLLOW_REFERRALS = True\n
"},{"location":"config/ldap_in_11.0_pro/#configure-multi-ldap-servers","title":"Configure Multi-ldap Servers","text":"Seafile Pro Edition supports multiple LDAP servers: you can configure two LDAP servers to work with Seafile. When getting or searching for an LDAP user, Seafile iterates over all configured LDAP servers until a match is found; when listing all LDAP users, it iterates over all LDAP servers to collect all users; for LDAP sync, it syncs all user/group info from all configured LDAP servers to Seafile.
Currently, only two LDAP servers are supported.
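The first-match lookup described above can be sketched as follows (a hypothetical illustration with mocked servers, not Seafile's actual code):

```python
# Hypothetical sketch of multi-LDAP user lookup: iterate over the configured
# servers in order and return the first match.
def find_user(login, servers):
    for server in servers:
        user = server.get(login)  # each "server" is mocked here as a dict
        if user is not None:
            return user
    return None

primary = {"alice@example.com": "alice"}
secondary = {"bob@example.top": "bob"}

print(find_user("bob@example.top", [primary, secondary]))  # bob
```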
If you want to use multi-ldap servers, please replace LDAP
in the options with MULTI_LDAP_1
, and then add them to seahub_settings.py
, for example:
# Basic config options\nENABLE_LDAP = True\n......\n\n# Multi ldap config options\nENABLE_MULTI_LDAP_1 = True\nMULTI_LDAP_1_SERVER_URL = 'ldap://192.168.0.2'\nMULTI_LDAP_1_BASE_DN = 'ou=test,dc=seafile,dc=top'\nMULTI_LDAP_1_ADMIN_DN = 'administrator@example.top'\nMULTI_LDAP_1_ADMIN_PASSWORD = 'Hello@123'\nMULTI_LDAP_1_PROVIDER = 'ldap1'\nMULTI_LDAP_1_LOGIN_ATTR = 'userPrincipalName'\n\n# Optional configs\nMULTI_LDAP_1_USER_FIRST_NAME_ATTR = 'givenName'\nMULTI_LDAP_1_USER_LAST_NAME_ATTR = 'sn'\nMULTI_LDAP_1_USER_NAME_REVERSE = False\nENABLE_MULTI_LDAP_1_EXTRA_USER_INFO_SYNC = True\n\nMULTI_LDAP_1_FILTER = 'memberOf=CN=testgroup,OU=test,DC=seafile,DC=ren' \nMULTI_LDAP_1_USE_PAGED_RESULT = False\nMULTI_LDAP_1_FOLLOW_REFERRALS = True\nENABLE_MULTI_LDAP_1_USER_SYNC = True\nENABLE_MULTI_LDAP_1_GROUP_SYNC = True\nMULTI_LDAP_1_SYNC_DEPARTMENT_FROM_OU = True\n\nMULTI_LDAP_1_USER_OBJECT_CLASS = 'person'\nMULTI_LDAP_1_DEPT_ATTR = ''\nMULTI_LDAP_1_UID_ATTR = ''\nMULTI_LDAP_1_CONTACT_EMAIL_ATTR = ''\nMULTI_LDAP_1_USER_ROLE_ATTR = ''\nMULTI_LDAP_1_AUTO_REACTIVATE_USERS = True\n\nMULTI_LDAP_1_GROUP_OBJECT_CLASS = 'group'\nMULTI_LDAP_1_GROUP_FILTER = ''\nMULTI_LDAP_1_GROUP_MEMBER_ATTR = 'member'\nMULTI_LDAP_1_GROUP_UUID_ATTR = 'objectGUID'\nMULTI_LDAP_1_CREATE_DEPARTMENT_LIBRARY = False\nMULTI_LDAP_1_DEPT_REPO_PERM = 'rw'\nMULTI_LDAP_1_DEFAULT_DEPARTMENT_QUOTA = -2\nMULTI_LDAP_1_SYNC_GROUP_AS_DEPARTMENT = False\nMULTI_LDAP_1_USE_GROUP_MEMBER_RANGE_QUERY = False\nMULTI_LDAP_1_USER_ATTR_IN_MEMBERUID = 'uid'\nMULTI_LDAP_1_DEPT_NAME_ATTR = ''\n......\n
!!! note: There are still some shared config options that are used for all LDAP servers, as follows:
```python\n# Common user sync options\nLDAP_SYNC_INTERVAL = 60\nIMPORT_NEW_USER = True # Whether to import new users when syncing users\nACTIVATE_USER_WHEN_IMPORT = True # Whether to activate the user when importing a new user\nDEACTIVE_USER_IF_NOTFOUND = False # Set to \"true\" if you want to deactivate a user \n # when he/she was deleted in the AD server.\n\n# Common group sync options\nDEL_GROUP_IF_NOT_FOUND = False # Set to \"true\", the sync process will delete a group if it is not found in the LDAP server.\nDEL_DEPARTMENT_IF_NOT_FOUND = False # Set to \"true\", the sync process will delete a department if it is not found in the LDAP server.\n```\n
"},{"location":"config/ldap_in_11.0_pro/#sso-and-ldap-users-use-the-same-uid","title":"SSO and LDAP users use the same uid","text":"If you sync users from LDAP to Seafile and want Seafile to find the existing account for a user who logs in via SSO (ADFS or OAuth) instead of creating a new one, you can set
SSO_LDAP_USE_SAME_UID = True\n
Here the UID means the unique user ID: in LDAP it is the attribute you use for LDAP_LOGIN_ATTR
(not LDAP_UID_ATTR
), in ADFS it is the uid
attribute. You need to make sure you use the same attribute for the two settings.
On this basis, if you only want users to log in using SSO and not through LDAP, you can set
USE_LDAP_SYNC_ONLY = True\n
"},{"location":"config/ldap_in_11.0_pro/#importing-roles-from-ldap","title":"Importing Roles from LDAP","text":"Seafile Pro Edition supports syncing roles from LDAP or Active Directory.
To enable this feature, add below option to seahub_settings.py
, e.g.
LDAP_USER_ROLE_ATTR = 'title'\n
LDAP_USER_ROLE_ATTR
is the attribute field to configure roles in LDAP. You can write a custom function to map the role by creating a file seahub_custom_functions.py
under conf/ and edit it like:
# -*- coding: utf-8 -*-\n\n# The AD roles attribute returns a list of roles (role_list).\n# The following function use the first entry in the list.\ndef ldap_role_mapping(role):\n if 'staff' in role:\n return 'Staff'\n if 'guest' in role:\n return 'Guest'\n if 'manager' in role:\n return 'Manager'\n\n# From version 11.0.11-pro, you can define the following function\n# to calculate a role from the role_list.\ndef ldap_role_list_mapping(role_list):\n if not role_list:\n return ''\n for role in role_list:\n if 'staff' in role:\n return 'Staff'\n if 'guest' in role:\n return 'Guest'\n if 'manager' in role:\n return 'Manager'\n
You should only define one of the two functions
You can rewrite the function (in python) to make your own mapping rules. If the file or function doesn't exist, the first entry in role_list will be synced.
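For example, with the ldap_role_list_mapping function shown above (re-defined here so the sketch is self-contained), a role list fetched from the directory resolves like this:

```python
# Illustration of the role mapping: first matching role in the list wins.
def ldap_role_list_mapping(role_list):
    if not role_list:
        return ''
    for role in role_list:
        if 'staff' in role:
            return 'Staff'
        if 'guest' in role:
            return 'Guest'
        if 'manager' in role:
            return 'Manager'

# 'guest-2024' is a hypothetical example value for an AD role attribute.
print(ldap_role_list_mapping(['guest-2024', 'manager']))  # Guest
print(ldap_role_list_mapping([]))                         # (empty string)
```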
"},{"location":"config/multi_institutions/","title":"Multiple Organization/Institution User Management","text":"Starting from version 5.1, you can add institutions into Seafile and assign users to institutions. Each institution can have one or more administrators. This feature eases user administration when multiple organizations (universities) share a single Seafile instance. Unlike multi-tenancy, the users are not isolated: a user from one institution can share files with another institution.
"},{"location":"config/multi_institutions/#turn-on-the-feature","title":"Turn on the feature","text":"In seahub_settings.py
, add MULTI_INSTITUTION = True
to enable multi-institution feature, and add
EXTRA_MIDDLEWARE += (\n 'seahub.institutions.middleware.InstitutionMiddleware',\n )\n
Please replace +=
with =
if EXTRA_MIDDLEWARE_CLASSES
or EXTRA_MIDDLEWARE
is not defined.
After restarting Seafile, a system admin can add institutions by adding the institution name in the admin panel. The admin can also click into an institution, which will list all users whose profile.institution
matches the name.
If you are using Shibboleth, you can map a Shibboleth attribute into institution. For example, the following configuration maps organization attribute to institution.
SHIBBOLETH_ATTRIBUTE_MAP = {\n \"givenname\": (False, \"givenname\"),\n \"sn\": (False, \"surname\"),\n \"mail\": (False, \"contact_email\"),\n \"organization\": (False, \"institution\"),\n}\n
"},{"location":"config/multi_tenancy/","title":"Multi-Tenancy Support","text":"The multi-tenancy feature is designed for hosting providers that want to host several customers in a single Seafile instance. You can create multiple organizations. Organizations are separated from each other. Users can't share libraries between organizations.
"},{"location":"config/multi_tenancy/#seafile-config","title":"Seafile Config","text":""},{"location":"config/multi_tenancy/#seafileconf","title":"seafile.conf","text":"[general]\nmulti_tenancy = true\n
"},{"location":"config/multi_tenancy/#seahub_settingspy","title":"seahub_settings.py","text":"CLOUD_MODE = True\nMULTI_TENANCY = True\n\nORG_MEMBER_QUOTA_ENABLED = True\n\nORG_ENABLE_ADMIN_CUSTOM_NAME = True # Default is True, meaning organization name can be customized\nORG_ENABLE_ADMIN_CUSTOM_LOGO = False # Default is False, if set to True, organization logo can be customized\n\nENABLE_MULTI_ADFS = True # Default is False, if set to True, support per organization custom ADFS/SAML2 login\nLOGIN_REDIRECT_URL = '/saml2/complete/'\nSAML_ATTRIBUTE_MAPPING = {\n 'name': ('display_name', ),\n 'mail': ('contact_email', ),\n ...\n}\n
"},{"location":"config/multi_tenancy/#usage","title":"Usage","text":"An organization can be created via system admin in \u201cadmin panel->organization->Add organization\u201d.
Every organization has a URL prefix. This field is for future usage. When a user creates an organization, a URL prefix like org1 will be automatically assigned.
After creating an organization, the first user will become the admin of that organization. The organization admin can add other users. Note that the system admin can't add users.
"},{"location":"config/multi_tenancy/#adfssaml-single-sign-on-integration-in-multi-tenancy","title":"ADFS/SAML single sign-on integration in multi-tenancy","text":""},{"location":"config/multi_tenancy/#preparation-for-adfssaml","title":"Preparation for ADFS/SAML","text":"1) Prepare SP(Seafile) certificate directory and SP certificates:
Create sp certs dir
$ mkdir -p /opt/seafile-data/seafile/seahub-data/certs\n
The SP certificate can be generated with the openssl command, or you can request one from a certificate authority; it is up to you. For example, generate the SP certs using the following command:
$ cd /opt/seafile-data/seafile/seahub-data/certs\n$ openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout sp.key -out sp.crt\n
The days
option indicates the validity period of the generated certificate, in days. The system admin needs to renew the certificate regularly.
Note
If certificates are not placed in /opt/seafile-data/seafile/seahub-data/certs
, you need to add the following configuration in seahub_settings.py:
SAML_CERTS_DIR = '/path/to/certs'\n
2) Add the following configuration to seahub_settings.py and then restart Seafile:
ENABLE_MULTI_ADFS = True\nLOGIN_REDIRECT_URL = '/saml2/complete/'\nSAML_ATTRIBUTE_MAPPING = {\n 'name': ('display_name', ),\n 'mail': ('contact_email', ),\n ...\n}\n
"},{"location":"config/multi_tenancy/#integration-with-adfssaml-single-sign-on","title":"Integration with ADFS/SAML single sign-on","text":"Please refer to this document.
"},{"location":"config/oauth/","title":"OAuth Authentication","text":""},{"location":"config/oauth/#oauth","title":"OAuth","text":"Before using OAuth, you should first register an OAuth2 client application on your authorization server, then add some configurations to seahub_settings.py.
"},{"location":"config/oauth/#register-an-oauth2-client-application","title":"Register an OAuth2 client application","text":"Here we use Github as an example. First you should register an OAuth2 client application on Github, official document from Github is very detailed.
"},{"location":"config/oauth/#configuration","title":"Configuration","text":"Add the following configurations to seahub_settings.py:
ENABLE_OAUTH = True\n\n# Whether to create a new user when he/she logs in to Seafile for the first time, default `True`.\nOAUTH_CREATE_UNKNOWN_USER = True\n\n# Whether to activate a new user when he/she logs in to Seafile for the first time, default `True`.\nOAUTH_ACTIVATE_USER_AFTER_CREATION = True\n\n# Usually OAuth works through SSL layer. If your server is not parametrized to allow HTTPS, some method will raise an \"oauthlib.oauth2.rfc6749.errors.InsecureTransportError\". Set this to `True` to avoid this error.\nOAUTH_ENABLE_INSECURE_TRANSPORT = True\n\n# Client id/secret generated by authorization server when you register your client application.\nOAUTH_CLIENT_ID = \"your-client-id\"\nOAUTH_CLIENT_SECRET = \"your-client-secret\"\n\n# Callback url when user authentication succeeded. Note, the redirect url you input when you register your client application MUST be exactly the same as this value.\nOAUTH_REDIRECT_URL = 'http{s}://example.com/oauth/callback/'\n\n# The following should NOT be changed if you are using Github as OAuth provider.\nOAUTH_PROVIDER_DOMAIN = 'github.com' \nOAUTH_PROVIDER = 'github.com'\n\nOAUTH_AUTHORIZATION_URL = 'https://github.com/login/oauth/authorize'\nOAUTH_TOKEN_URL = 'https://github.com/login/oauth/access_token'\nOAUTH_USER_INFO_URL = 'https://api.github.com/user'\nOAUTH_SCOPE = [\"user\",]\nOAUTH_ATTRIBUTE_MAP = {\n \"id\": (True, \"email\"), # Please keep the 'email' option unchanged to be compatible with the login of users of version 11.0 and earlier.\n \"name\": (False, \"name\"),\n \"email\": (False, \"contact_email\"),\n \"uid\": (True, \"uid\"), # Seafile v11.0 + \n}\n
"},{"location":"config/oauth/#more-explanations-about-the-settings","title":"More explanations about the settings","text":"OAUTH_PROVIDER / OAUTH_PROVIDER_DOMAIN
OAUTH_PROVIDER_DOMAIN will be deprecated and can be replaced by OAUTH_PROVIDER. This variable is used in the database to identify the third-party provider, either as a domain or as an easy-to-remember string shorter than 32 characters.
OAUTH_ATTRIBUTE_MAP
This variable describes which claims from the response of the user-info endpoint are filled into which attributes of the new Seafile user. The format is shown below:
OAUTH_ATTRIBUTE_MAP = {\n <:Attribute in the OAuth provider>: (<:Is required or not in Seafile?>, <:Attribute in Seafile >)\n }\n
If the remote resource server, like Github, also uses email to identify a unique user, Seafile will use the Github id directly. The OAUTH_ATTRIBUTE_MAP setting for Github should look like this:
OAUTH_ATTRIBUTE_MAP = {\n \"id\": (True, \"email\"), # it is deprecated\n \"uid / id / username\": (True, \"uid\") \n\n # extra infos you want to update to Seafile\n \"name\": (False, \"name\"),\n \"email\": (False, \"contact_email\"), \n }\n
The key id stands for the unique identifier of a user in Github; it tells Seafile which attribute the remote resource server uses to identify its users. The value True indicates whether this field is mandatory for Seafile.
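As a rough illustration of the tuple format described above, the following sketch shows how such a map could be applied to a user-info response. This is not Seafile's actual implementation; `map_attributes` is a hypothetical helper.

```python
# A minimal sketch (not Seafile's actual code) of applying an
# OAUTH_ATTRIBUTE_MAP entry <remote attribute>: (<required?>, <Seafile attribute>)
# to the response of the user-info endpoint.
OAUTH_ATTRIBUTE_MAP = {
    "id": (True, "email"),    # required: abort login if missing
    "name": (False, "name"),  # optional: skip silently if missing
}

def map_attributes(user_info: dict) -> dict:
    result = {}
    for remote_attr, (required, seafile_attr) in OAUTH_ATTRIBUTE_MAP.items():
        if remote_attr not in user_info:
            if required:
                raise ValueError(f"missing required attribute: {remote_attr}")
            continue
        result[seafile_attr] = user_info[remote_attr]
    return result
```

With this sketch, a missing required claim stops the login, while a missing optional claim is simply not filled in.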
Since version 11.0, Seafile uses uid as the external unique identifier of the user. It stores uid in the table social_auth_usersocialauth and maps it to the internal unique identifier used in Seafile. Different OAuth systems have different attributes, which may be id, uid, or username, etc. The id/email config id: (True, email) is deprecated.
If you upgrade from a version below 11.0, you need to have both fields configured, i.e., your configuration should look like:
OAUTH_ATTRIBUTE_MAP = {\n \"id\": (True, \"email\"),\n \"uid\": (True, \"uid\") ,\n \"name\": (False, \"name\"),\n \"email\": (False, \"contact_email\"), \n }\n
In this way, when a user logs in, Seafile will first use the \"id -> email\" map to find the old user and then create a \"uid -> uid\" map for this old user. After all users have logged in once, you can delete the \"id\": (True, \"email\") configuration.
If you use a newly deployed 11.0+ Seafile instance, you don't need the \"id\": (True, \"email\") item. Your configuration should look like:
OAUTH_ATTRIBUTE_MAP = {\n \"uid\": (True, \"uid\") ,\n \"name\": (False, \"name\"),\n \"email\": (False, \"contact_email\"), \n }\n
"},{"location":"config/oauth/#sample-settings","title":"Sample settings","text":"GoogleGithubGitLabAzure Cloud ENABLE_OAUTH = True\nOAUTH_ENABLE_INSECURE_TRANSPORT = True\n\nOAUTH_CLIENT_ID = \"your-client-id\"\nOAUTH_CLIENT_SECRET = \"your-client-secret\"\nOAUTH_REDIRECT_URL = 'http{s}://example.com/oauth/callback/'\n\n# The following shoud NOT be changed if you are using Google as OAuth provider.\nOAUTH_PROVIDER_DOMAIN = 'google.com'\nOAUTH_AUTHORIZATION_URL = 'https://accounts.google.com/o/oauth2/v2/auth'\nOAUTH_TOKEN_URL = 'https://www.googleapis.com/oauth2/v4/token'\nOAUTH_USER_INFO_URL = 'https://www.googleapis.com/oauth2/v1/userinfo'\nOAUTH_SCOPE = [\n \"openid\",\n \"https://www.googleapis.com/auth/userinfo.email\",\n \"https://www.googleapis.com/auth/userinfo.profile\",\n]\nOAUTH_ATTRIBUTE_MAP = {\n \"sub\": (True, \"uid\"),\n \"name\": (False, \"name\"),\n \"email\": (False, \"contact_email\"),\n}\n
Note
For Github, email is not the unique identifier for a user, but id is in most cases, so we use id in the settings example in our manual. As Seafile currently uses email to identify a unique user account, we combine id and OAUTH_PROVIDER_DOMAIN (github.com in this case) into an email-format string and then create this account if it does not exist.
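The composition described in this note can be sketched as follows. This is an illustration only; `oauth_email` is a hypothetical helper, not a Seafile function.

```python
# Sketch (not Seafile's code) of deriving an email-format account name from
# the numeric Github id and OAUTH_PROVIDER_DOMAIN, as the note above describes.
OAUTH_PROVIDER_DOMAIN = 'github.com'

def oauth_email(user_info: dict) -> str:
    # user_info is the JSON response of https://api.github.com/user,
    # e.g. {'id': 583231, 'login': 'octocat', ...}
    return f"{user_info['id']}@{OAUTH_PROVIDER_DOMAIN}"

print(oauth_email({'id': 583231}))  # 583231@github.com
```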
ENABLE_OAUTH = True\nOAUTH_ENABLE_INSECURE_TRANSPORT = True\n\nOAUTH_CLIENT_ID = \"your-client-id\"\nOAUTH_CLIENT_SECRET = \"your-client-secret\"\nOAUTH_REDIRECT_URL = 'http{s}://example.com/oauth/callback/'\n\nOAUTH_PROVIDER_DOMAIN = 'github.com'\nOAUTH_AUTHORIZATION_URL = 'https://github.com/login/oauth/authorize'\nOAUTH_TOKEN_URL = 'https://github.com/login/oauth/access_token'\nOAUTH_USER_INFO_URL = 'https://api.github.com/user'\nOAUTH_SCOPE = [\"user\",]\nOAUTH_ATTRIBUTE_MAP = {\n \"id\": (True, 'uid'),\n \"email\": (False, \"contact_email\"),\n \"name\": (False, \"name\"),\n}\n
Note
To enable OAuth via GitLab, create an application in GitLab (under Admin area -> Applications).
Fill in required fields:
Name: a name you specify
Redirect URI: the callback URL (see OAUTH_REDIRECT_URL below)
Trusted: skip the confirmation dialog page. Select this so the user is not asked whether to authorize Seafile to access his/her account data.
Scopes: Select openid
and read_user
in the scopes list.
Press submit, then copy the client id and secret shown on the confirmation page and use them in this template for your seahub_settings.py:
ENABLE_OAUTH = True\nOAUTH_CLIENT_ID = \"your-client-id\"\nOAUTH_CLIENT_SECRET = \"your-client-secret\"\nOAUTH_REDIRECT_URL = \"https://your-seafile/oauth/callback/\"\n\nOAUTH_PROVIDER_DOMAIN = 'your-domain'\nOAUTH_AUTHORIZATION_URL = 'https://gitlab.your-domain/oauth/authorize'\nOAUTH_TOKEN_URL = 'https://gitlab.your-domain/oauth/token'\nOAUTH_USER_INFO_URL = 'https://gitlab.your-domain/api/v4/user'\nOAUTH_SCOPE = [\"openid\", \"read_user\"]\nOAUTH_ATTRIBUTE_MAP = {\n \"email\": (True, \"uid\"),\n \"name\": (False, \"name\")\n}\n
Note
For users of Azure Cloud, there is no id field in the response of Azure Cloud's user-info endpoint, so we use a special OAUTH_ATTRIBUTE_MAP setting (the other settings are the same as for Github/Google). Please see this tutorial for the complete deployment process of OAuth against Azure Cloud.
OAUTH_ATTRIBUTE_MAP = {\n \"email\": (True, \"uid\"),\n \"name\": (False, \"name\")\n}\n
"},{"location":"config/ocm/","title":"Open Cloud Mesh","text":"From 8.0.0, Seafile supports OCM protocol. With OCM, user can share library to other server which enabled OCM too.
Seafile currently supports sharing between Seafile servers with version greater than 8.0, and sharing from NextCloud to Seafile since 9.0.
These two functions cannot be enabled at the same time.
"},{"location":"config/ocm/#configuration","title":"Configuration","text":"Add the following configuration to seahub_settings.py
.
# Enable OCM\nENABLE_OCM = True\nOCM_PROVIDER_ID = '71687320-6219-47af-82f3-32012707a5ae' # the unique id of this server\nOCM_REMOTE_SERVERS = [\n {\n \"server_name\": \"dev\",\n \"server_url\": \"https://seafile-domain-1/\", # should end with '/'\n },\n {\n \"server_name\": \"download\",\n \"server_url\": \"https://seafile-domain-2/\", # should end with '/'\n },\n]\n
# Enable OCM\nENABLE_OCM_VIA_WEBDAV = True\nOCM_PROVIDER_ID = '71687320-6219-47af-82f3-32012707a5ae' # the unique id of this server\nOCM_REMOTE_SERVERS = [\n {\n \"server_name\": \"nextcloud\",\n \"server_url\": \"https://nextcloud-domain-1/\", # should end with '/'\n }\n]\n
OCM_REMOTE_SERVERS is a list of servers that you allow your users to share libraries with.
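As a quick sanity check of the list format (the comments in the config above require every server_url to end with '/'), here is an illustrative helper; it is not part of Seafile.

```python
# Illustrative check, not Seafile code: verify each OCM remote server entry
# has a server_url ending in '/', and report the names of bad entries.
OCM_REMOTE_SERVERS = [
    {"server_name": "dev", "server_url": "https://seafile-domain-1/"},
    {"server_name": "download", "server_url": "https://seafile-domain-2/"},
]

def check_ocm_servers(servers: list) -> list:
    # returns the server_name of every misconfigured entry
    return [s["server_name"] for s in servers
            if not s["server_url"].endswith("/")]

print(check_ocm_servers(OCM_REMOTE_SERVERS))  # [] means all URLs are valid
```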
"},{"location":"config/ocm/#usage","title":"Usage","text":""},{"location":"config/ocm/#share-library-to-other-server","title":"Share library to other server","text":"In the library sharing dialog jump to \"Share to other server\", you can share this library to users of another server with \"Read-Only\" or \"Read-Write\" permission. You can also view shared records and cancel sharing.
"},{"location":"config/ocm/#view-be-shared-libraries","title":"View be shared libraries","text":"You can jump to \"Shared from other servers\" page to view the libraries shared by other servers and cancel the sharing.
And enter the library to view, download or upload files.
"},{"location":"config/remote_user/","title":"SSO using Remote User","text":"Starting from 7.0.0, Seafile can integrate with various Single Sign On systems via a proxy server. Examples include Apache as Shibboleth proxy, or LemonLdap as a proxy to LDAP servers, or Apache as Kerberos proxy. Seafile can retrieve user information from special request headers (HTTP_REMOTE_USER, HTTP_X_AUTH_USER, etc.) set by the proxy servers.
After the proxy server (Apache/Nginx) is successfully authenticated, the user information is set to the request header, and Seafile creates and logs in the user based on this information.
Make sure that the proxy server has a corresponding security mechanism to protect against request-header forgery attacks.
Please add the following settings to conf/seahub_settings.py
to enable this feature.
ENABLE_REMOTE_USER_AUTHENTICATION = True\n\n# Optional, HTTP header, which is configured in your web server conf file,\n# used for Seafile to get the user's unique id, default value is 'HTTP_REMOTE_USER'.\nREMOTE_USER_HEADER = 'HTTP_REMOTE_USER'\n\n# Optional, when the value of HTTP_REMOTE_USER is not a valid email address,\n# Seafile will build an email-like unique id from the value of 'REMOTE_USER_HEADER'\n# and this domain, e.g. user1@example.com.\nREMOTE_USER_DOMAIN = 'example.com'\n\n# Optional, whether to create a new user in the Seafile system, default value is True.\n# If this setting is disabled, users that don't preexist in the Seafile DB cannot log in.\n# The admin has to first import the users from external systems like LDAP.\nREMOTE_USER_CREATE_UNKNOWN_USER = True\n\n# Optional, whether to activate new users in the Seafile system, default value is True.\n# If this setting is disabled, the user will be unable to log in by default;\n# the administrator needs to manually activate this user.\nREMOTE_USER_ACTIVATE_USER_AFTER_CREATION = True\n\n# Optional, map user attributes in the HTTP header to Seafile's user attributes.\nREMOTE_USER_ATTRIBUTE_MAP = {\n 'HTTP_DISPLAYNAME': 'name',\n 'HTTP_MAIL': 'contact_email',\n\n # for user info\n \"HTTP_GIVENNAME\": 'givenname',\n \"HTTP_SN\": 'surname',\n \"HTTP_ORGANIZATION\": 'institution',\n\n # for user role\n 'HTTP_Shibboleth-affiliation': 'affiliation',\n}\n\n# Map affiliation to user role. Though the config name is SHIBBOLETH_AFFILIATION_ROLE_MAP,\n# it is not restricted to Shibboleth\nSHIBBOLETH_AFFILIATION_ROLE_MAP = {\n 'employee@uni-mainz.de': 'staff',\n 'member@uni-mainz.de': 'staff',\n 'student@uni-mainz.de': 'student',\n 'employee@hu-berlin.de': 'guest',\n 'patterns': (\n ('*@hu-berlin.de', 'guest1'),\n ('*@*.de', 'guest2'),\n ('*', 'guest'),\n ),\n}\n
Then restart Seafile.
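The email-like unique id mentioned in the config comments (the REMOTE_USER header value combined with REMOTE_USER_DOMAIN) can be sketched as follows. This is an illustration of the described behavior, not Seafile's implementation.

```python
# Sketch (not Seafile's code): build the unique id from the configured
# request header, appending REMOTE_USER_DOMAIN when the header value is
# not already an email-like address.
REMOTE_USER_HEADER = 'HTTP_REMOTE_USER'
REMOTE_USER_DOMAIN = 'example.com'

def unique_id(request_meta: dict) -> str:
    value = request_meta[REMOTE_USER_HEADER]
    if '@' in value:
        return value  # already email-like, use as-is
    return f"{value}@{REMOTE_USER_DOMAIN}"

print(unique_id({'HTTP_REMOTE_USER': 'user1'}))  # user1@example.com
```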
"},{"location":"config/roles_permissions/","title":"Roles and Permissions Support","text":"You can add/edit roles and permission for users. A role is just a group of users with some pre-defined permissions, you can toggle user roles in user list page at admin panel. For most permissions, the meaning can be easily obtained from the variable name. The following is a further detailed introduction to some variables.
role_quota is used to set the quota for users with a certain role. For example, we can set the quota of employee to 100G by adding 'role_quota': '100g', and leave users with other roles at the default quota.
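Quota strings such as '100g' combine an integer with a unit suffix. A hypothetical parser (not Seafile's code) might convert them to bytes like this:

```python
# Illustrative sketch: convert a role_quota string such as '100g' to bytes.
# An empty string is taken to mean "use the default quota".
UNITS = {'k': 1024, 'm': 1024**2, 'g': 1024**3, 't': 1024**4}

def parse_role_quota(quota: str) -> int:
    quota = quota.strip().lower()
    if not quota:
        return 0
    if quota[-1] in UNITS:
        return int(quota[:-1]) * UNITS[quota[-1]]
    return int(quota)  # bare integer, assumed to already be in bytes

print(parse_role_quota('100g'))  # 107374182400
```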
can_add_public_repo sets whether a role can create a public library; the default is False.
Since version 11.0.9 Pro, can_share_repo has been added to limit users' ability to share a library.
The can_add_public_repo option will not take effect if you configure the global CLOUD_MODE = True.
The storage_ids permission is used for assigning storage backends to users with a specific role. More details can be found in multiple storage backends.
upload_rate_limit
and download_rate_limit
are added to limit upload and download speed for users with different roles.
Note
After configuring the rate limit, run the following command in the seafile-server-latest directory to make the configuration take effect:
./seahub.sh python-env python3 seahub/manage.py set_user_role_upload_download_rate_limit\n
can_drag_drop_folder_to_sync: allow or deny users to sync a folder by dragging and dropping
can_export_files_via_mobile_client: allow or deny users to export files using the mobile client
Seafile comes with two built-in roles, default and guest. A default user is a normal user with the following permissions:
'default': {\n 'can_add_repo': True,\n 'can_share_repo': True,\n 'can_add_group': True,\n 'can_view_org': True,\n 'can_add_public_repo': False,\n 'can_use_global_address_book': True,\n 'can_generate_share_link': True,\n 'can_generate_upload_link': True,\n 'can_send_share_link_mail': True,\n 'can_invite_guest': False,\n 'can_drag_drop_folder_to_sync': True,\n 'can_connect_with_android_clients': True,\n 'can_connect_with_ios_clients': True,\n 'can_connect_with_desktop_clients': True,\n 'can_export_files_via_mobile_client': True,\n 'storage_ids': [],\n 'role_quota': '',\n 'can_publish_repo': True,\n 'upload_rate_limit': 0, # unit: kb/s\n 'download_rate_limit': 0,\n },\n
While a guest user can only read files/folders in the system, here are the permissions for a guest user:
'guest': {\n 'can_add_repo': False,\n 'can_share_repo': False,\n 'can_add_group': False,\n 'can_view_org': False,\n 'can_add_public_repo': False,\n 'can_use_global_address_book': False,\n 'can_generate_share_link': False,\n 'can_generate_upload_link': False,\n 'can_send_share_link_mail': False,\n 'can_invite_guest': False,\n 'can_drag_drop_folder_to_sync': False,\n 'can_connect_with_android_clients': False,\n 'can_connect_with_ios_clients': False,\n 'can_connect_with_desktop_clients': False,\n 'can_export_files_via_mobile_client': False,\n 'storage_ids': [],\n 'role_quota': '',\n 'can_publish_repo': False,\n 'upload_rate_limit': 0,\n 'download_rate_limit': 0,\n },\n
"},{"location":"config/roles_permissions/#edit-build-in-roles","title":"Edit build-in roles","text":"If you want to edit the permissions of build-in roles, e.g. default users can invite guest, guest users can view repos in organization, you can add following lines to seahub_settings.py
with corresponding permissions set to True
.
ENABLED_ROLE_PERMISSIONS = {\n 'default': {\n 'can_add_repo': True,\n 'can_share_repo': True,\n 'can_add_group': True,\n 'can_view_org': True,\n 'can_add_public_repo': False,\n 'can_use_global_address_book': True,\n 'can_generate_share_link': True,\n 'can_generate_upload_link': True,\n 'can_send_share_link_mail': True,\n 'can_invite_guest': False,\n 'can_drag_drop_folder_to_sync': True,\n 'can_connect_with_android_clients': True,\n 'can_connect_with_ios_clients': True,\n 'can_connect_with_desktop_clients': True,\n 'can_export_files_via_mobile_client': True,\n 'storage_ids': [],\n 'role_quota': '',\n 'can_publish_repo': True,\n 'upload_rate_limit': 2000, # unit: kb/s\n 'download_rate_limit': 4000,\n },\n 'guest': {\n 'can_add_repo': False,\n 'can_share_repo': False,\n 'can_add_group': False,\n 'can_view_org': False,\n 'can_add_public_repo': False,\n 'can_use_global_address_book': False,\n 'can_generate_share_link': False,\n 'can_generate_upload_link': False,\n 'can_send_share_link_mail': False,\n 'can_invite_guest': False,\n 'can_drag_drop_folder_to_sync': False,\n 'can_connect_with_android_clients': False,\n 'can_connect_with_ios_clients': False,\n 'can_connect_with_desktop_clients': False,\n 'can_export_files_via_mobile_client': False,\n 'storage_ids': [],\n 'role_quota': '',\n 'can_publish_repo': False,\n 'upload_rate_limit': 100,\n 'download_rate_limit': 200,\n }\n}\n
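One way to picture how a role's permission lookup can fall back to the built-in defaults is the following sketch. It is illustrative only, not Seafile's code; `DEFAULT_PERMS` here stands in for the built-in default role shown earlier.

```python
# Illustrative sketch: look up a permission for a role, falling back to the
# built-in defaults when the role or the permission key is not configured.
DEFAULT_PERMS = {'can_add_repo': True, 'can_invite_guest': False}

ENABLED_ROLE_PERMISSIONS = {
    'guest': {'can_add_repo': False},
}

def has_permission(role: str, perm: str) -> bool:
    role_perms = ENABLED_ROLE_PERMISSIONS.get(role, {})
    return role_perms.get(perm, DEFAULT_PERMS.get(perm, False))

print(has_permission('guest', 'can_add_repo'))     # False
print(has_permission('employee', 'can_add_repo'))  # True (falls back to default)
```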
"},{"location":"config/roles_permissions/#more-about-guest-invitation-feature","title":"More about guest invitation feature","text":"An user who has can_invite_guest
permission can invite people outside of the organization as guest.
In order to use this feature, in addition to granting the can_invite_guest permission to the user, add the following lines to seahub_settings.py:
ENABLE_GUEST_INVITATION = True\n\n# invitation expire time\nINVITATIONS_TOKEN_AGE = 72 # hours\n
After restarting, users who have the can_invite_guest permission will see an \"Invite People\" section in the sidebar of the home page.
Users can invite a guest user by providing his/her email address; the system will email the invite link to the user.
Tip
If you want to block certain email addresses for the invitation, you can define a blacklist, e.g.
INVITATION_ACCEPTER_BLACKLIST = [\"a@a.com\", \"*@a-a-a.com\", r\".*@(foo|bar).com\", ]\n
After that, the email address \"a@a.com\", any email address ending with \"@a-a-a.com\", and any email address ending with \"@foo.com\" or \"@bar.com\" will not be allowed.
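A rough sketch of how such a blacklist could be checked is shown below. The matching logic is illustrative only (exact match, glob-style entries containing '*', and regex entries), not Seafile's actual implementation.

```python
import fnmatch
import re

# Example blacklist from the text above; matching here is a sketch,
# not Seafile's code.
INVITATION_ACCEPTER_BLACKLIST = ["a@a.com", "*@a-a-a.com", r".*@(foo|bar).com"]

def is_blacklisted(email: str) -> bool:
    for pattern in INVITATION_ACCEPTER_BLACKLIST:
        if pattern == email:
            return True  # exact address match
        if '*' in pattern and fnmatch.fnmatch(email, pattern):
            return True  # glob-style match, e.g. "*@a-a-a.com"
        try:
            if re.fullmatch(pattern, email):
                return True  # regex match, e.g. r".*@(foo|bar).com"
        except re.error:
            pass  # pattern is a glob, not a valid regex; ignore
    return False
```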
"},{"location":"config/roles_permissions/#add-custom-roles","title":"Add custom roles","text":"If you want to add a new role and assign some users with this role, e.g. new role employee
can invite guest and can create public library and have all other permissions a default user has, you can add following lines to seahub_settings.py
ENABLED_ROLE_PERMISSIONS = {\n 'default': {\n 'can_add_repo': True,\n 'can_share_repo': True,\n 'can_add_group': True,\n 'can_view_org': True,\n 'can_add_public_repo': False,\n 'can_use_global_address_book': True,\n 'can_generate_share_link': True,\n 'can_generate_upload_link': True,\n 'can_send_share_link_mail': True,\n 'can_invite_guest': False,\n 'can_drag_drop_folder_to_sync': True,\n 'can_connect_with_android_clients': True,\n 'can_connect_with_ios_clients': True,\n 'can_connect_with_desktop_clients': True,\n 'can_export_files_via_mobile_client': True,\n 'storage_ids': [],\n 'role_quota': '',\n 'can_publish_repo': True,\n 'upload_rate_limit': 2000, # unit: kb/s\n 'download_rate_limit': 4000,\n },\n 'guest': {\n 'can_add_repo': False,\n 'can_share_repo': False,\n 'can_add_group': False,\n 'can_view_org': False,\n 'can_add_public_repo': False,\n 'can_use_global_address_book': False,\n 'can_generate_share_link': False,\n 'can_generate_upload_link': False,\n 'can_send_share_link_mail': False,\n 'can_invite_guest': False,\n 'can_drag_drop_folder_to_sync': False,\n 'can_connect_with_android_clients': False,\n 'can_connect_with_ios_clients': False,\n 'can_connect_with_desktop_clients': False,\n 'can_export_files_via_mobile_client': False,\n 'storage_ids': [],\n 'role_quota': '',\n 'can_publish_repo': False,\n 'upload_rate_limit': 100,\n 'download_rate_limit': 200,\n },\n 'employee': {\n 'can_add_repo': True,\n 'can_share_repo': True,\n 'can_add_group': True,\n 'can_view_org': True,\n 'can_add_public_repo': True,\n 'can_use_global_address_book': True,\n 'can_generate_share_link': True,\n 'can_generate_upload_link': True,\n 'can_send_share_link_mail': True,\n 'can_invite_guest': True,\n 'can_drag_drop_folder_to_sync': True,\n 'can_connect_with_android_clients': True,\n 'can_connect_with_ios_clients': True,\n 'can_connect_with_desktop_clients': True,\n 'can_export_files_via_mobile_client': True,\n 'storage_ids': [],\n 'role_quota': '',\n 'can_publish_repo': True,\n 
'upload_rate_limit': 500,\n 'download_rate_limit': 800,\n },\n}\n
"},{"location":"config/saml2_in_10.0/","title":"SAML 2.0 in version 10.0+","text":"In this document, we use Microsoft Azure SAML single sign-on app and Microsoft on-premise ADFS to show how Seafile integrate SAML 2.0. Other SAML 2.0 provider should be similar.
"},{"location":"config/saml2_in_10.0/#preparations-for-saml-20","title":"Preparations for SAML 2.0","text":"First, install xmlsec1 package:
$ apt update\n$ apt install xmlsec1\n$ apt install dnsutils # For multi-tenancy feature\n
Second, prepare the SP (Seafile) certificate directory and the SP certificates:
Create the certs directory:
$ mkdir -p /opt/seafile/seahub-data/certs\n
The SP certificate can be generated with the openssl command, or you can obtain one from a certificate authority; it is up to you. For example, generate the SP certs with the following command:
$ cd /opt/seafile/seahub-data/certs\n$ openssl req -x509 -nodes -days 3650 -newkey rsa:2048 -keyout sp.key -out sp.crt\n
The days option indicates the validity period of the generated certificate, in days. The system admin needs to renew the certificate regularly.
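To see when a generated certificate expires, openssl can print its notAfter date. The snippet below generates a certificate non-interactively into a temporary directory (the paths and -subj value are examples only) and then checks the expiry date:

```shell
# Sketch: generate the SP certificate non-interactively and check its
# expiry date. Paths and the subject are illustrative.
mkdir -p /tmp/seafile-certs
openssl req -x509 -nodes -days 3650 -newkey rsa:2048 \
  -keyout /tmp/seafile-certs/sp.key -out /tmp/seafile-certs/sp.crt \
  -subj "/CN=example.com"
# Print the notAfter date; renew the certificate before this date passes.
openssl x509 -in /tmp/seafile-certs/sp.crt -noout -enddate
```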
If you use Microsoft Azure SAML app to achieve single sign-on, please follow the steps below:
First, add SAML single sign-on app and assign users, refer to: add an Azure AD SAML application, create and assign users.
Second, setup the Identifier, Reply URL, and Sign on URL of the SAML app based on your service URL, refer to: enable single sign-on for saml app. The format of the Identifier, Reply URL, and Sign on URL are: https://example.com/saml2/metadata/, https://example.com/saml2/acs/, https://example.com/, e.g.:
Next, edit saml attributes & claims. Keep the default attributes & claims of SAML app unchanged, the uid attribute must be added, the mail and name attributes are optional, e.g.:
Next, download the base64 format SAML app's certificate and rename to idp.crt:
and put it under the certs directory (/opt/seafile/seahub-data/certs).
Next, copy the metadata URL of the SAML app:
and paste it into the SAML_REMOTE_METADATA_URL
option in seahub_settings.py, e.g.:
SAML_REMOTE_METADATA_URL = 'https://login.microsoftonline.com/xxx/federationmetadata/2007-06/federationmetadata.xml?appid=xxx' # copy from SAML app\n
Next, add the ENABLE_ADFS_LOGIN, LOGIN_REDIRECT_URL and SAML_ATTRIBUTE_MAPPING options to seahub_settings.py, and then restart Seafile, e.g.:
ENABLE_ADFS_LOGIN = True\nLOGIN_REDIRECT_URL = '/saml2/complete/'\nSAML_ATTRIBUTE_MAPPING = {\n 'name': ('display_name', ),\n 'mail': ('contact_email', ),\n 'seafile_groups': ('', ), # Optional, set this attribute if you need to synchronize groups/departments.\n ...\n\n}\nSAML_REMOTE_METADATA_URL = 'https://login.microsoftonline.com/xxx/federationmetadata/2007-06/federationmetadata.xml?appid=xxx' # copy from SAML app\n
Note
If the xmlsec1 binary is not located at /usr/bin/xmlsec1, you need to add the following configuration in seahub_settings.py: SAML_XMLSEC_BINARY_PATH = '/path/to/xmlsec1'\n
View where the xmlsec1 binary is located:
$ which xmlsec1\n
If the certs directory is not /opt/seafile/seahub-data/certs, you need to add the following configuration in seahub_settings.py: SAML_CERTS_DIR = '/path/to/certs'\n
Finally, open the browser, go to the Seafile login page, click Single Sign-On, and use a user assigned to the SAML app to perform a SAML login test.
If you use Microsoft ADFS to achieve single sign-on, please follow the steps below:
First, please make sure the following preparations are done:
A Windows Server with ADFS installed. For configuring and installing ADFS you can see this article.
A valid SSL certificate for the ADFS server; here we use temp.adfs.com as the example domain name.
A valid SSL certificate for the Seafile server; here we use demo.seafile.com as the example domain name.
Second, download the base64 format certificate and upload it:
Navigate to the AD FS management window. In the left sidebar menu, navigate to Services > Certificates.
Locate the Token-signing certificate. Right-click the certificate and select View Certificate.
In the dialog box, select the Details tab.
Click Copy to File.
In the Certificate Export Wizard that opens, click Next.
Select Base-64 encoded X.509 (.CER), then click Next.
Name it idp.crt, then click Next.
Click Finish to complete the download.
And then put it under the certs directory (/opt/seafile/seahub-data/certs).
Next, add the following configurations to seahub_settings.py and then restart Seafile:
ENABLE_ADFS_LOGIN = True\nLOGIN_REDIRECT_URL = '/saml2/complete/'\nSAML_ATTRIBUTE_MAPPING = {\n 'name': ('display_name', ),\n 'mail': ('contact_email', ),\n 'seafile_groups': ('', ), # Optional, set this attribute if you need to synchronize groups/departments.\n ...\n}\nSAML_REMOTE_METADATA_URL = 'https://temp.adfs.com/federationmetadata/2007-06/federationmetadata.xml' # The format of the ADFS federation metadata URL is: `https://{your ADFS domain name}/federationmetadata/2007-06/federationmetadata.xml`\n
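The SAML_ATTRIBUTE_MAPPING above relates assertion attributes to Seafile user fields. As a rough illustration (not Seafile's implementation), the mapping could be applied like this:

```python
# Illustrative sketch: copy mapped SAML assertion attributes into Seafile
# user fields; the tuple values allow one attribute to fill several fields.
SAML_ATTRIBUTE_MAPPING = {
    'name': ('display_name',),
    'mail': ('contact_email',),
}

def map_saml_attributes(assertion: dict) -> dict:
    user_info = {}
    for saml_attr, seafile_fields in SAML_ATTRIBUTE_MAPPING.items():
        if saml_attr in assertion:
            for field in seafile_fields:
                user_info[field] = assertion[saml_attr]
    return user_info

print(map_saml_attributes({'name': 'Jane', 'mail': 'jane@demo.seafile.com'}))
```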
Next, add relying party trust:
Log into the ADFS server and open the ADFS management.
Under Actions, click Add Relying Party Trust.
On the Welcome page, choose Claims aware and click Start.
Select Import data about the relying party published online or on a local network, type your metadata URL in Federation metadata address (host name or URL), and then click Next. Your metadata URL format is: https://example.com/saml2/metadata/, e.g.:
On the Specify Display Name page type a name in Display name, e.g. Seafile
, under Notes type a description for this relying party trust, and then click Next.
In the Choose an access control policy window, select Permit everyone, then click Next.
Review your settings, then click Next.
Click Close.
Next, create claims rules:
Open the ADFS management, click Relying Party Trusts.
Right-click your trust, and then click Edit Claim Issuance Policy.
On the Issuance Transform Rules tab click Add Rules.
Click the Claim rule template dropdown menu and select Send LDAP Attributes as Claims, and then click Next.
In the Claim rule name field, type the display name for this rule, such as Seafile Claim rule. Click the Attribute store dropdown menu and select Active Directory. In the LDAP Attribute column, click the dropdown menu and select User-Principal-Name. In the Outgoing Claim Type column, click the dropdown menu and select UPN. And then click Finish.
Click Add Rule again.
Click the Claim rule template dropdown menu and select Transform an Incoming Claim, and then click Next.
In the Claim rule name field, type the display name for this rule, such as UPN to Name ID. Click the Incoming claim type dropdown menu and select UPN (it must match the Outgoing Claim Type in the Seafile Claim rule). Click the Outgoing claim type dropdown menu and select Name ID. Click the Outgoing name ID format dropdown menu and select Email. And then click Finish.
Click OK to add both new rules.
When creating claims rules, you can also select other LDAP Attributes, such as E-Mail-Addresses, depending on your ADFS service.
Finally, open the browser, go to the Seafile login page, and click Single Sign-On to perform an ADFS login test.
In the file seafevents.conf
:
[DATABASE]\ntype = mysql\nhost = 192.168.0.2\nport = 3306\nusername = seafile\npassword = password\nname = seahub_db\n\n[STATISTICS]\n## must be \"true\" to enable statistics\nenabled = false\n\n[SEAHUB EMAIL]\n## must be \"true\" to enable user email notifications when there are new unread notifications\nenabled = true\n\n## interval of sending Seahub email. Can be s(seconds), m(minutes), h(hours), d(days)\ninterval = 30m\n\n[FILE HISTORY]\nenabled = true\nthreshold = 5\nsuffix = md,txt,...\n\n## From seafile 7.0.0\n## Recording file history to database for fast access is enabled by default for 'Markdown, .txt, ppt, pptx, doc, docx, xls, xlsx'. \n## After enabling the feature, the old history versions for markdown, doc, docx files will not be listed in the history page.\n## (Only new histories that are stored in the database will be listed.) But users can still access the old versions in the library snapshots.\n## For file types not listed in the suffix, history versions will be scanned from the library history as before.\n## The feature is enabled by default. You can set 'enabled = false' to disable it.\n\n## The 'threshold' is the time threshold for recording the historical version of a file, in minutes, the default is 5 minutes. \n## This means that if the interval between two adjacent file saves is less than 5 minutes, the two file changes will be merged and recorded as a historical version. \n## When set to 0, there is no time limit, which means that each save will generate a separate historical version.\n\n## If you need to modify the file list format, you can add 'suffix = md, txt, ...' configuration items to achieve this.\n
"},{"location":"config/seafevents-conf/#the-following-configurations-for-pro-edition-only","title":"The following configurations for Pro Edition only","text":"[AUDIT]\n## Audit log is disabled default.\n## Leads to additional SQL tables being filled up, make sure your SQL server is able to handle it.\nenabled = true\n\n[INDEX FILES]\n## must be \"true\" to enable search\nenabled = true\n\n## The interval the search index is updated. Can be s(seconds), m(minutes), h(hours), d(days)\ninterval=10m\n\n## From Seafile 6.3.0 pro, in order to speed up the full-text search speed, you should setup\nhighlight = fvh\n\n## If true, indexes the contents of office/pdf files while updating search index\n## Note: If you change this option from \"false\" to \"true\", then you need to clear the search index and update the index again.\n## Refer to file search manual for details.\nindex_office_pdf=false\n\n## The default size limit for doc, docx, ppt, pptx, xls, xlsx and pdf files. Files larger than this will not be indexed.\n## Since version 6.2.0\n## Unit: MB\noffice_file_size_limit = 10\n\n## From 9.0.7 pro, Seafile supports connecting to Elasticsearch through username and password, you need to configure username and password for the Elasticsearch server\nusername = elastic # username to connect to Elasticsearch\npassword = elastic_password # password to connect to Elasticsearch\n\n## From 9.0.7 pro, Seafile supports connecting to elasticsearch via HTTPS, you need to configure HTTPS for the Elasticsearch server\nscheme = https # The default is http. If the Elasticsearch server is not configured with HTTPS, the scheme and cafile do not need to be configured\ncafile = path/to/cert.pem # The certificate path for user authentication. 
If the Elasticsearch server does not enable certificate authentication, this does not need to be configured\n\n## From version 11.0.5 Pro, you can customize ElasticSearch index names for distinct instances when integrating multiple Seafile servers with a single ElasticSearch server.\nrepo_status_index_name = your-repo-status-index-name # default is `repo_head`\nrepo_files_index_name = your-repo-files-index-name # default is `repofiles`\n\n## The default loglevel is `warning`.\n## Since version 11.0.4\nloglevel = info\n\n[EVENTS PUBLISH]\n## must be \"true\" to enable publishing event messages\nenabled = false\n## message format: repo-update\\t{{repo_id}}}\\t{{commit_id}}\n## Currently only supports the redis message queue\nmq_type = redis\n\n[REDIS]\n## redis use the 0 database and \"repo_update\" channel\nserver = 192.168.1.1\nport = 6379\npassword = q!1w@#123\n\n[AUTO DELETION]\nenabled = true # Default is false, when enabled, users can use the file auto deletion feature\ninterval = 86400 # The unit is second(s), the default frequency is one day, that is, it runs once a day\n
"},{"location":"config/seafile-conf/","title":"Seafile.conf settings","text":"Important
Every entry in this configuration file is case-sensitive.
You need to restart seafile and seahub so that your changes take effect.
./seahub.sh restart\n./seafile.sh restart\n
"},{"location":"config/seafile-conf/#storage-quota-setting","title":"Storage Quota Setting","text":"You may set a default quota (e.g. 2GB) for all users. To do this, just add the following lines to seafile.conf
file
[quota]\n# default user quota in GB, integer only\ndefault = 2\n
This setting applies to all users. If you want to set quota for a specific user, you may log in to seahub website as administrator, then set it in \"System Admin\" page.
Since Pro version 10.0.9, you can set the maximum number of files allowed in a library; when this limit is exceeded, files cannot be uploaded to this library. There is no limit by default.
[quota]\nlibrary_file_limit = 100000\n
"},{"location":"config/seafile-conf/#default-history-length-limit","title":"Default history length limit","text":"If you don't want to keep all file revision history, you may set a default history length limit for all libraries.
[history]\nkeep_days = days of history to keep\n
"},{"location":"config/seafile-conf/#default-trash-expiration-time","title":"Default trash expiration time","text":"The default time for automatic cleanup of the libraries trash is 30 days.You can modify this time by adding the following configuration\uff1a
[library_trash]\nexpire_days = 60\n
"},{"location":"config/seafile-conf/#system-trash","title":"System Trash","text":"Seafile uses a system trash, where deleted libraries will be moved to. In this way, accidentally deleted libraries can be recovered by system admin.
"},{"location":"config/seafile-conf/#cache-pro-edition-only","title":"Cache (Pro Edition Only)","text":"Seafile Pro Edition uses memory caches in various cases to improve performance. Some session information is also saved into the memory cache to be shared among the cluster nodes. Memcached or Redis can be used for the memory cache.
If you use memcached:
[memcached]\n# Replace `localhost` with the memcached address:port if you're using remote memcached\n# POOL-MIN and POOL-MAX is used to control connection pool size. Usually the default is good enough.\nmemcached_options = --SERVER=localhost --POOL-MIN=10 --POOL-MAX=100\n
If you use redis:
[redis]\n# your redis server address\nredis_host = 127.0.0.1\n# your redis server port\nredis_port = 6379\n# size of connection pool to redis, default is 100\nmax_connections = 100\n
Redis support is added in version 11.0. Currently only single-node Redis is supported. Redis Sentinel or Cluster is not supported yet.
"},{"location":"config/seafile-conf/#seafile-fileserver-configuration","title":"Seafile fileserver configuration","text":"The configuration of seafile fileserver is in the [fileserver]
section of the file seafile.conf
[fileserver]\n# bind address for fileserver\n# default to 0.0.0.0, if deployed without proxy: no access restriction\n# set to 127.0.0.1, if used with local proxy: only access by local\nhost = 127.0.0.1\n# tcp port for fileserver\nport = 8082\n
Since Community Edition 6.2 and Pro Edition 6.1.9, you can set the number of worker threads used to serve HTTP requests. The default value is 10, which is a good value for most use cases.
[fileserver]\nworker_threads = 15\n
Change upload/download settings.
[fileserver]\n# Set maximum upload file size to 200M.\n# If not configured, there is no file size limit for uploading.\nmax_upload_size=200\n\n# Set maximum download directory size to 200M.\n# Default is 100M.\nmax_download_dir_size=200\n
After a file is uploaded via the web interface, or the cloud file browser in the client, it needs to be divided into fixed-size blocks and stored in the storage backend. We call this procedure \"indexing\". By default, the file server uses 1 thread to sequentially index the file and store the blocks one by one. This is suitable for most cases. But if you're using S3/Ceph/Swift backends, you may have more bandwidth in the storage backend for storing multiple blocks in parallel. We provide an option to define the number of concurrent threads used in indexing:
[fileserver]\nmax_indexing_threads = 10\n
When users upload files in the web interface (seahub), the file server divides the file into fixed-size blocks. The default block size for web-uploaded files is 8MB. The block size can be set here.
[fileserver]\n#Set block size to 2MB\nfixed_block_size=2\n
When users upload files in the web interface, the file server assigns a token to authorize the upload operation. This token is valid for 1 hour by default. When uploading a large file via WAN, the upload time can be longer than 1 hour. You can change the token expiration time to a larger value.
[fileserver]\n#Set uploading time limit to 3600s\nweb_token_expire_time=3600\n
You can download a folder as a zip archive from seahub, but some zip software on Windows doesn't support UTF-8. In that case you can use the \"windows_encoding\" setting to solve it.
[zip]\n# The file name encoding of the downloaded zip file.\nwindows_encoding = iso-8859-1\n
The \"httptemp\" directory contains temporary files created during file upload and zip download. In some cases the temporary files are not cleaned up after a file transfer is interrupted. Starting from version 7.1.5, the file server regularly scans the \"httptemp\" directory to remove files created a long time ago.
[fileserver]\n# After how much time a temp file will be removed. The unit is in seconds. Default to 3 days.\nhttp_temp_file_ttl = x\n# File scan interval. The unit is in seconds. Default to 1 hour.\nhttp_temp_scan_interval = x\n
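A concrete sketch with the documented defaults written out explicitly (3 days = 259200 seconds for the TTL, 1 hour = 3600 seconds for the scan interval):

```ini
[fileserver]
# Remove a temp file 3 days after creation (the documented default).
http_temp_file_ttl = 259200
# Scan the httptemp directory every hour (the documented default).
http_temp_scan_interval = 3600
```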
New in Seafile Pro 7.1.16 and Pro 8.0.3: You can set the maximum number of files contained in a library that can be synced by the Seafile client. The default is 100000. When you download a repo, the Seafile client requests the fs id list, and you can control the timeout of this request through the fs_id_list_request_timeout
configuration, which defaults to 5 minutes. These two options were added to prevent long fs-id-list requests from overloading the server.
Since Pro 8.0.4 version, you can set both options to -1, to allow unlimited size and timeout.
[fileserver]\nmax_sync_file_count = 100000\nfs_id_list_request_timeout = 300\n
If you use object storage as the storage backend, when a large file is frequently downloaded, the same blocks need to be fetched from the storage backend to the Seafile server. This may waste bandwidth and cause high load on the internal network. Since Seafile Pro 8.0.5, block caching was added to improve the situation.
To enable it, add the use_block_cache option in the [fileserver] group; it's not enabled by default. The block_cache_size_limit option limits the size of the cache; its default value is 10GB. The blocks are cached in the seafile-data/block-cache directory. When the total size of cached files exceeds the limit, seaf-server cleans up older files until the size drops to 70% of the limit. The cleanup interval is 5 minutes. You have to have a good estimate of how much space you need for the cache directory; otherwise, on frequent downloads this directory can quickly fill up. The block_cache_file_types configuration chooses the file types that are cached; its default value is mp4;mov.[fileserver]\nuse_block_cache = true\n# Set block cache size limit to 100MB\nblock_cache_size_limit = 100\nblock_cache_file_types = mp4;mov\n
When a large number of files are uploaded through the web page and API, it will be expensive to calculate block IDs based on the block contents. Since Seafile-pro-9.0.6, you can add the skip_block_hash
option to use a random string as block ID. Warning
This option will prevent fsck from checking block content integrity. You should specify --shallow
option to fsck to not check content integrity.
[fileserver]\nskip_block_hash = true\n
If you want to limit the type of files when uploading files, since Seafile Pro 10.0.0 version, you can set file_ext_white_list
option in the [fileserver]
group. This option is a list of file types; only the file types in this list are allowed to be uploaded. It's not enabled by default.
[fileserver]\nfile_ext_white_list = md;mp4;mov\n
Since seafile 10.0.1, when you use go fileserver, you can set upload_limit
and download_limit
option in the [fileserver]
group to limit the speed of file upload and download. It's not enabled by default.
[fileserver]\n# The unit is in KB/s.\nupload_limit = 100\ndownload_limit = 100\n
Since Seafile 11.0.7 Pro, you can ask the file server to check every file uploaded via web APIs for viruses. Find more options about virus scanning at virus scan.
[fileserver]\n# default is false\ncheck_virus_on_web_upload = true\n
"},{"location":"config/seafile-conf/#database-configuration","title":"Database configuration","text":"The configurations of database are stored in the [database]
section.
Since Seafile 11.0, SQLite is no longer supported
[database]\ntype=mysql\nhost=127.0.0.1\nuser=root\npassword=root\ndb_name=seafile_db\nconnection_charset=utf8\nmax_connections=100\n
When you configure seafile server to use MySQL, the default connection pool size is 100, which should be enough for most use cases.
Since Seafile 10.0.2, you can enable the encrypted connections to the MySQL server by adding the following configuration options:
[database]\nuse_ssl = true\nskip_verify = false\nca_path = /etc/mysql/ca.pem\n
When use_ssl is set to true and skip_verify to false, the MySQL server certificate is verified against the CA configured in ca_path. The ca_path option is a trusted CA certificate path for signing MySQL server certificates. When skip_verify is true, there is no need to add the ca_path option, and the MySQL server certificate won't be verified.
The Seafile Pro server automatically expires file locks after some time, to prevent a file from staying locked for too long. The expiration time can be tuned in the seafile.conf file.
[file_lock]\ndefault_expire_hours = 6\n
The default is 12 hours.
Since Seafile-pro-9.0.6, you can add a cache for getting locked files (to reduce server load caused by sync clients). Since Pro Edition 12, this option is enabled by default.
[file_lock]\nuse_locked_file_cache = true\n
At the same time, you also need to configure the following memcache options for the cache to take effect:
[memcached]\nmemcached_options = --SERVER=<the IP of Memcached Server> --POOL-MIN=10 --POOL-MAX=100\n
"},{"location":"config/seafile-conf/#storage-backends","title":"Storage Backends","text":"You may configure Seafile to use various kinds of object storage backends.
You may also configure Seafile to use multiple storage backends at the same time.
"},{"location":"config/seafile-conf/#cluster","title":"Cluster","text":"When you deploy Seafile in a cluster, you should add the following configuration:
[cluster]\nenabled = true\n
Tip
Since version 12, if you use Docker to deploy cluster, this option is no longer needed.
"},{"location":"config/seafile-conf/#enable-slow-log","title":"Enable Slow Log","text":"Since Seafile-pro-6.3.10, you can enable seaf-server's RPC slow log to do performance analysis. The slow log is enabled by default.
If you want to configure related options, add the options to seafile.conf:
[slow_log]\n# default to true\nenable_slow_log = true\n# the unit of all slow log thresholds is millisecond.\n# default to 5000 milliseconds, only RPC queries processed for longer than 5000 milliseconds will be logged.\nrpc_slow_threshold = 5000\n
You can find seafile_slow_rpc.log
in logs/slow_logs
. You can also use log-rotate to rotate the log files. You just need to send SIGUSR2
to seaf-server
process. The slow log file will be closed and reopened.
Since 9.0.2 Pro, the signal to trigger log rotation has been changed to SIGUSR1
. This signal will trigger rotation for all log files opened by seaf-server. You should change your log rotate settings accordingly.
Even though Nginx logs all requests with certain details, such as URL, response code and upstream process time, it's sometimes desirable to have more context about the requests, such as the user id for each request. Such information can only be logged by the file server itself. Since 9.0.2 Pro, an access log feature has been added to the fileserver.
To enable access log, add below options to seafile.conf:
[fileserver]\n# default to false. If enabled, fileserver-access.log will be written to log directory.\nenable_access_log = true\n
The log format is as follows:
start time - user id - url - response code - process time\n
You can use SIGUSR1
to trigger log rotation.
Seafile 9.0 introduces a new fileserver implemented in Go programming language. To enable it, you can set the options below in seafile.conf:
[fileserver]\nuse_go_fileserver = true\n
Go fileserver has several advantages over the traditional fileserver implemented in C language. For example, the C fileserver uses the max_sync_file_count option to limit the size of a library to be synced; the default is 100K. With Go fileserver you can set this option to a much higher number, such as 1 million. The max_download_dir_size option is thus no longer needed by Go fileserver.

Go fileserver caches fs objects in memory. On the one hand, this avoids repeated creation and destruction of frequently accessed objects; on the other hand, it slows down the speed at which objects are released, which prevents Go's GC mechanism from consuming too much CPU time. You can set the size of memory used by the fs cache through the following options.
[fileserver]\n# The unit is in M. Default to 2G.\nfs_cache_limit = 100\n
"},{"location":"config/seafile-conf/#profiling-go-fileserver-performance","title":"Profiling Go Fileserver Performance","text":"Since Seafile 9.0.7, you can enable the profile function of go fileserver by adding the following configuration options:
# profile_password is required, change it for your need\n[fileserver]\nenable_profiling = true\nprofile_password = 8kcUz1I2sLaywQhCRtn2x1\n
This interface can be used through the pprof tool provided by Go language. See https://pkg.go.dev/net/http/pprof for details. Note that you have to first install Go on the client that issues the below commands. The password parameter should match the one you set in the configuration.
go tool pprof http://localhost:8082/debug/pprof/heap?password=8kcUz1I2sLaywQhCRtn2x1\ngo tool pprof http://localhost:8082/debug/pprof/profile?password=8kcUz1I2sLaywQhCRtn2x1\n
"},{"location":"config/seafile-conf/#notification-server-configuration","title":"Notification server configuration","text":"Since Seafile 10.0.0, you can ask Seafile server to send notifications (file changes, lock changes and folder permission changes) to Notification Server component.
[notification]\nenabled = true\n# IP address of the server running notification server\n# or \"notification-server\" if you are running notification server container on the same host as Seafile server\nhost = 192.168.0.83\n# the port of notification server\nport = 8083\n
Tip
The configuration here only works for version >= 12.0. The configuration for the notification server was changed in 12.0 to make it clearer. The new configuration is not compatible with older versions.
"},{"location":"config/seahub_customization/","title":"Seahub customization","text":""},{"location":"config/seahub_customization/#customize-seahub-logo-and-css","title":"Customize Seahub Logo and CSS","text":"Create a customization folder:
Deploy in Docker: mkdir -p /opt/seafile-data/seahub/media/custom\n
Deploy from binary packages: mkdir /opt/seafile/seafile-server-latest/seahub/media/custom\n
During upgrading, the Seafile upgrade script will automatically create symbolic links to preserve your customization.
"},{"location":"config/seahub_customization/#customize-logo","title":"Customize Logo","text":"Add your logo file to custom/
Overwrite LOGO_PATH
in seahub_settings.py
LOGO_PATH = 'custom/mylogo.png'\n
The default width and height for the logo are 149px and 32px; you may need to change them to match your logo.
LOGO_WIDTH = 149\nLOGO_HEIGHT = 32\n
"},{"location":"config/seahub_customization/#customize-favicon","title":"Customize Favicon","text":"Add your favicon file to custom/
Overwrite FAVICON_PATH
in seahub_settings.py
FAVICON_PATH = 'custom/favicon.png'\n
"},{"location":"config/seahub_customization/#customize-seahub-css","title":"Customize Seahub CSS","text":"Add your css file to custom/
, for example, custom.css
Overwrite BRANDING_CSS
in seahub_settings.py
BRANDING_CSS = 'custom/custom.css'\n
"},{"location":"config/seahub_customization/#customize-help-page","title":"Customize help page","text":"Deploy in DockerDeploy from binary packages mkdir -p /opt/seafile-data/seahub/media/custom/templates/help/\ncd /opt/seafile-data/seahub/media/custom\ncp ../../help/templates/help/install.html templates/help/\n
Deploy from binary packages: mkdir /opt/seafile/seafile-server-latest/seahub/media/custom/templates/help/\ncd /opt/seafile/seafile-server-latest/seahub/media/custom\ncp ../../help/templates/help/install.html templates/help/\n
Modify the templates/help/install.html
file and save it. You will see the new help page.
You can add an extra note in sharing dialog in seahub_settings.py
ADDITIONAL_SHARE_DIALOG_NOTE = {\n 'title': 'Attention! Read before sharing files:',\n 'content': 'Do not share personal or confidential official data with **.'\n}\n
Result:
"},{"location":"config/seahub_customization/#add-custom-navigation-items","title":"Add custom navigation items","text":"Since Pro 7.0.9, Seafile supports adding some custom navigation entries to the home page for quick access. This requires you to add the following configuration information to the conf/seahub_settings.py
configuration file:
CUSTOM_NAV_ITEMS = [\n {'icon': 'sf2-icon-star',\n 'desc': 'Custom navigation 1',\n 'link': 'https://www.seafile.com'\n },\n {'icon': 'sf2-icon-wiki-view',\n 'desc': 'Custom navigation 2',\n 'link': 'https://www.seafile.com/help'\n },\n {'icon': 'sf2-icon-wrench',\n 'desc': 'Custom navigation 3',\n 'link': 'http://www.example.com'\n },\n]\n
Note
The icon
field currently only supports icons in Seafile that begin with sf2-icon
. You can find the list of icons here:
Then restart the Seahub service for the changes to take effect.
Once you log in to the Seafile system homepage again, you will see the new navigation entry under the Tools
navigation bar on the left.
ADDITIONAL_APP_BOTTOM_LINKS = {\n 'seafile': 'https://example.seahub.com/seahub',\n 'dtable-web': 'https://example.seahub.com/web'\n}\n
Result:
"},{"location":"config/seahub_customization/#add-more-links-to-about-dialog","title":"Add more links to about dialog","text":"ADDITIONAL_ABOUT_DIALOG_LINKS = {\n 'seafile': 'https://example.seahub.com/seahub',\n 'dtable-web': 'https://example.seahub.com/dtable-web'\n}\n
Result:
"},{"location":"config/seahub_settings_py/","title":"Seahub Settings","text":"Tip
You can also modify most of the config items via the web interface. These config items are saved in a database table (seahub-db/constance_config) and take priority over the items in config files. If you want to disable settings via the web interface, you can add ENABLE_SETTINGS_VIA_WEB = False
to seahub_settings.py
.
Refer to email sending documentation.
"},{"location":"config/seahub_settings_py/#cache","title":"Cache","text":"Seahub caches items (avatars, profiles, etc.) on the file system by default (/tmp/seahub_cache/). You can replace this with Memcached or Redis.
Memcached: # on Debian/Ubuntu 18.04+\napt-get install memcached libmemcached-dev -y\npip3 install --timeout=3600 pylibmc django-pylibmc\n\nsystemctl enable --now memcached\n
Add the following configuration to seahub_settings.py
.
CACHES = {\n 'default': {\n 'BACKEND': 'django_pylibmc.memcached.PyLibMCCache',\n 'LOCATION': '127.0.0.1:11211',\n },\n}\n
Redis support was added in Seafile version 11.0
Install Redis with package installers in your OS.
Please refer to Django's documentation about using Redis cache.
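A minimal sketch of the corresponding CACHES setting using Django's built-in Redis cache backend (the address and port are placeholders; check the Django cache documentation matching your Django version):

```python
# In seahub_settings.py -- a sketch; replace the location with your Redis server.
CACHES = {
    'default': {
        'BACKEND': 'django.core.cache.backends.redis.RedisCache',
        'LOCATION': 'redis://127.0.0.1:6379',
    },
}
```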
# For security consideration, please set to match the host/domain of your site, e.g., ALLOWED_HOSTS = ['.example.com'].\n# Please refer https://docs.djangoproject.com/en/dev/ref/settings/#allowed-hosts for details.\nALLOWED_HOSTS = ['.myseafile.com']\n\n\n# Whether to use a secure cookie for the CSRF cookie\n# https://docs.djangoproject.com/en/3.2/ref/settings/#csrf-cookie-secure\nCSRF_COOKIE_SECURE = True\n\n# The value of the SameSite flag on the CSRF cookie\n# https://docs.djangoproject.com/en/3.2/ref/settings/#csrf-cookie-samesite\nCSRF_COOKIE_SAMESITE = 'Strict'\n\n# https://docs.djangoproject.com/en/3.2/ref/settings/#csrf-trusted-origins\nCSRF_TRUSTED_ORIGINS = ['https://www.myseafile.com']\n
"},{"location":"config/seahub_settings_py/#user-management-options","title":"User management options","text":"The following options affect user registration, password and session.
# Enable or disable registration on web. Default is `False`.\nENABLE_SIGNUP = False\n\n# Activate or deactivate a user when registration is complete. Default is `True`.\n# If set to `False`, new users need to be activated by an admin in the admin panel.\nACTIVATE_AFTER_REGISTRATION = False\n\n# Whether to send email when a system admin adds a new member. Default is `True`.\nSEND_EMAIL_ON_ADDING_SYSTEM_MEMBER = True\n\n# Whether to send email when a system admin resets a user's password. Default is `True`.\nSEND_EMAIL_ON_RESETTING_USER_PASSWD = True\n\n# Send the system admin a notify email when user registration is complete. Default is `False`.\nNOTIFY_ADMIN_AFTER_REGISTRATION = True\n\n# Remember days for login. Default is 7\nLOGIN_REMEMBER_DAYS = 7\n\n# Attempt limit before showing a captcha when logging in.\nLOGIN_ATTEMPT_LIMIT = 3\n\n# Deactivate user account when login attempts exceed the limit\n# Since version 5.1.2 or pro 5.1.3\nFREEZE_USER_ON_LOGIN_FAILED = False\n\n# Minimum length for a user's password\nUSER_PASSWORD_MIN_LENGTH = 6\n\n# LEVEL based on four types of input:\n# num, upper letter, lower letter, other symbols\n# '3' means password must have at least 3 types of the above.\nUSER_PASSWORD_STRENGTH_LEVEL = 3\n\n# default False, only check USER_PASSWORD_MIN_LENGTH\n# when True, check password strength level, STRONG (or above) is allowed\nUSER_STRONG_PASSWORD_REQUIRED = False\n\n# Force user to change password when an admin adds/resets a user.\n# Added in 5.1.1, defaults to True.\nFORCE_PASSWORD_CHANGE = True\n\n# Age of cookie, in seconds (default: 2 weeks).\nSESSION_COOKIE_AGE = 60 * 60 * 24 * 7 * 2\n\n# Whether a user's session cookie expires when the Web browser is closed.\nSESSION_EXPIRE_AT_BROWSER_CLOSE = False\n\n# Whether to save the session data on every request. Default is `False`\nSESSION_SAVE_EVERY_REQUEST = False\n\n# Whether to enable the feature \"published library\". 
Default is `False`\n# Since 6.1.0 CE\nENABLE_WIKI = True\n\n# In old versions, if you use Single Sign On, the password is not saved in Seafile.\n# Users can't use WebDAV because Seafile can't check whether the password is correct.\n# Since version 6.3.8, you can enable this option to let users specify a password for WebDAV login.\n# Users who log in via SSO can use this password to log in to WebDAV.\n# Enable the feature. pycryptodome should be installed first.\n# sudo pip install pycryptodome==3.12.0\nENABLE_WEBDAV_SECRET = True\nWEBDAV_SECRET_MIN_LENGTH = 8\n\n# LEVEL for the password, based on four types of input:\n# num, upper letter, lower letter, other symbols\n# '3' means password must have at least 3 types of the above.\nWEBDAV_SECRET_STRENGTH_LEVEL = 1\n\n\n# Since version 7.0.9, you can force all users to log in with two factor authentication.\n# The prerequisite is that the administrator should 'enable two factor authentication' in the 'System Admin -> Settings' page.\n# Then you can add the following configuration information to the configuration file.\nENABLE_FORCE_2FA_TO_ALL_USERS = True\n
"},{"location":"config/seahub_settings_py/#library-snapshot-label-feature","title":"Library snapshot label feature","text":"# Turn on this option to let users to add a label to a library snapshot. Default is `False`\nENABLE_REPO_SNAPSHOT_LABEL = False\n
"},{"location":"config/seahub_settings_py/#library-options","title":"Library options","text":"Options for libraries:
# Whether to allow creating encrypted libraries\nENABLE_ENCRYPTED_LIBRARY = True\n\n# version for encrypted library\n# should only be `2` or `4`.\n# version 3 is insecure (using AES128 encryption) so it's not recommended any more.\nENCRYPTED_LIBRARY_VERSION = 2\n\n# Since version 12, you can choose the password hash algorithm for new encrypted libraries.\n# The password is used to encrypt the encryption key. So using a secure password hash algorithm to\n# prevent brute-force password guessing is important.\n# Before version 12, a fixed algorithm (PBKDF2-SHA256 with 1000 iterations) was used.\n#\n# Currently two hash algorithms are supported.\n# - PBKDF2: The only available parameter is the number of iterations. You need to increase the\n# number of iterations over time, as GPUs are more and more used for such calculations.\n# The default number of iterations is 1000. As of 2023, the recommended iteration count is 600,000.\n# - Argon2id: Secure hash algorithm that has high cost even for GPUs. There are 3 parameters that\n# can be set: time cost, memory cost, and parallelism degree. The parameters are separated by commas,\n# e.g. \"2,102400,8\", which are the default parameters used in Seafile. 
Learn more about this algorithm\n# on https://github.com/P-H-C/phc-winner-argon2 .\n#\n# Note that only sync client >= 9.0.9 and SeaDrive >= 3.0.12 support syncing libraries created with these algorithms.\nENCRYPTED_LIBRARY_PWD_HASH_ALGO = \"argon2id\"\nENCRYPTED_LIBRARY_PWD_HASH_PARAMS = \"2,102400,8\"\n# ENCRYPTED_LIBRARY_PWD_HASH_ALGO = \"pbkdf2_sha256\"\n# ENCRYPTED_LIBRARY_PWD_HASH_PARAMS = \"600000\"\n\n# minimum length for the password of an encrypted library\nREPO_PASSWORD_MIN_LENGTH = 8\n\n# force the use of a password when generating a share/upload link (since version 8.0.9)\nSHARE_LINK_FORCE_USE_PASSWORD = False\n\n# minimum length for the password of a share link (since version 4.4)\nSHARE_LINK_PASSWORD_MIN_LENGTH = 8\n\n# LEVEL for the password of a share/upload link\n# based on four types of input:\n# num, upper letter, lower letter, other symbols\n# '3' means password must have at least 3 types of the above. (since version 8.0.9)\nSHARE_LINK_PASSWORD_STRENGTH_LEVEL = 3\n\n# Default expire days for share link (since version 6.3.8)\n# Once this value is configured, the user can no longer generate a share link with no expiration time.\n# If the expiration value is not set when the share link is generated, the value configured here will be used.\nSHARE_LINK_EXPIRE_DAYS_DEFAULT = 5\n\n# minimum expire days for share link (since version 6.3.6)\n# SHARE_LINK_EXPIRE_DAYS_MIN should be less than SHARE_LINK_EXPIRE_DAYS_DEFAULT (If the latter is set).\nSHARE_LINK_EXPIRE_DAYS_MIN = 3 # default is 0, no limit.\n\n# maximum expire days for share link (since version 6.3.6)\n# SHARE_LINK_EXPIRE_DAYS_MAX should be greater than SHARE_LINK_EXPIRE_DAYS_DEFAULT (If the latter is set).\nSHARE_LINK_EXPIRE_DAYS_MAX = 8 # default is 0, no limit.\n\n# Default expire days for upload link (since version 7.1.6)\n# Once this value is configured, the user can no longer generate an upload link with no expiration time.\n# If the expiration value is not set when the upload link is generated, the 
value configured here will be used.\nUPLOAD_LINK_EXPIRE_DAYS_DEFAULT = 5\n\n# minimum expire days for upload link (since version 7.1.6)\n# UPLOAD_LINK_EXPIRE_DAYS_MIN should be less than UPLOAD_LINK_EXPIRE_DAYS_DEFAULT (If the latter is set).\nUPLOAD_LINK_EXPIRE_DAYS_MIN = 3 # default is 0, no limit.\n\n# maximum expire days for upload link (since version 7.1.6)\n# UPLOAD_LINK_EXPIRE_DAYS_MAX should be greater than UPLOAD_LINK_EXPIRE_DAYS_DEFAULT (If the latter is set).\nUPLOAD_LINK_EXPIRE_DAYS_MAX = 8 # default is 0, no limit.\n\n# force user login when viewing a file/folder share link (since version 6.3.6)\nSHARE_LINK_LOGIN_REQUIRED = True\n\n# enable watermark when viewing (not editing) a file in the web browser (since version 6.3.6)\nENABLE_WATERMARK = True\n\n# Disable sync with any folder. Default is `False`\n# NOTE: since version 4.2.4\nDISABLE_SYNC_WITH_ANY_FOLDER = True\n\n# Enable or disable library history setting\nENABLE_REPO_HISTORY_SETTING = True\n\n# Enable or disable users sharing a library to any group\n# Since version 6.2.0\nENABLE_SHARE_TO_ALL_GROUPS = True\n\n# Enable or disable users cleaning the trash (default is True)\n# Since version 6.3.6\nENABLE_USER_CLEAN_TRASH = True\n\n# Add a report abuse button on download links. (since version 7.1.0)\n# Users can report abuse on the share link page, filling in the report type, contact information, and description.\n# Default is false.\nENABLE_SHARE_LINK_REPORT_ABUSE = True\n
Options for online file preview:
# Online preview maximum file size, defaults to 30M.\nFILE_PREVIEW_MAX_SIZE = 30 * 1024 * 1024\n\n# Extensions of previewed text files.\n# NOTE: since version 6.1.1\nTEXT_PREVIEW_EXT = \"\"\"ac, am, bat, c, cc, cmake, cpp, cs, css, diff, el, h, html,\nhtm, java, js, json, less, make, org, php, pl, properties, py, rb,\nscala, script, sh, sql, txt, text, tex, vi, vim, xhtml, xml, log, csv,\ngroovy, rst, patch, go\"\"\"\n\n\n# Seafile only generates thumbnails for images smaller than the following size.\n# Since version 6.3.8 pro, PSD online preview is supported.\nTHUMBNAIL_IMAGE_SIZE_LIMIT = 30 # MB\n\n# Enable or disable thumbnails for video. ffmpeg and moviepy should be installed first.\n# For details, please refer to https://manual.seafile.com/deploy/video_thumbnails.html\n# NOTE: this option is deprecated in version 7.1\nENABLE_VIDEO_THUMBNAIL = False\n\n# Use the frame at 5 seconds as the thumbnail\n# NOTE: this option is deprecated in version 7.1\nTHUMBNAIL_VIDEO_FRAME_TIME = 5\n\n# Absolute filesystem path to the directory that will hold thumbnail files.\nTHUMBNAIL_ROOT = '/haiwen/seahub-data/thumbnail/thumb/'\n\n# Default size for picture preview. Enlarging this size can improve the preview quality.\n# NOTE: since version 6.1.1\nTHUMBNAIL_SIZE_FOR_ORIGINAL = 1024\n
"},{"location":"config/seahub_settings_py/#cloud-mode","title":"Cloud Mode","text":"You should enable cloud mode if you use Seafile with an unknown user base. It disables the organization tab in Seahub's website to ensure that users can't access the user list. Cloud mode provides some nice features like sharing content with unregistered users and sending invitations to them, so you will probably also want to enable user registration. Through the global address book (since version 4.2.3) anyone can search for every user account, so you probably want to disable it.
# Enable cloud mode and hide `Organization` tab.\nCLOUD_MODE = True\n\n# Disable global address book\nENABLE_GLOBAL_ADDRESSBOOK = False\n
"},{"location":"config/seahub_settings_py/#single-sign-on","title":"Single Sign On","text":"# Enable authentication with ADFS\n# Default is False\n# Since 6.0.9\nENABLE_ADFS_LOGIN = True\n\n# Force user login through ADFS instead of email and password\n# Default is False\n# Since 11.0.7\nDISABLE_ADFS_USER_PWD_LOGIN = True\n\n# Enable authentication with Kerberos\n# Default is False\nENABLE_KRB5_LOGIN = True\n\n# Enable authentication with Shibboleth\n# Default is False\nENABLE_SHIBBOLETH_LOGIN = True\n\n# Enable client to open an external browser for single sign on\n# When it is false, the old built-in browser is opened for single sign on\n# When it is true, the default browser of the operating system is opened\n# The benefit of using the system browser is that it can support hardware 2FA\n# Since 11.0.0, and sync client 9.0.5, drive client 3.0.8\nCLIENT_SSO_VIA_LOCAL_BROWSER = True # default is False\nCLIENT_SSO_UUID_EXPIRATION = 5 * 60 # in seconds\n
"},{"location":"config/seahub_settings_py/#other-options","title":"Other options","text":"# This is the outside URL for Seahub (Seafile Web). \n# The domain part (i.e., www.example.com) will be used in generating share links and downloading/uploading files via the web.\n# Note: SERVICE_URL is moved to seahub_settings.py since 9.0.0\n# Note: SERVICE_URL is no longer used since version 12.0\n# SERVICE_URL = 'https://seafile.example.com:'\n\n# Disable settings via Web interface in system admin->settings\n# Default is True\n# Since 5.1.3\nENABLE_SETTINGS_VIA_WEB = False\n\n# Choices can be found here:\n# http://en.wikipedia.org/wiki/List_of_tz_zones_by_name\n# although not all choices may be available on all operating systems.\n# If running in a Windows environment this must be set to the same as your\n# system time zone.\nTIME_ZONE = 'UTC'\n\n# Language code for this installation. All choices can be found here:\n# http://www.i18nguy.com/unicode/language-identifiers.html\n# Default language for sending emails.\nLANGUAGE_CODE = 'en'\n\n# Custom language code choice.\nLANGUAGES = (\n ('en', 'English'),\n ('zh-cn', '\u7b80\u4f53\u4e2d\u6587'),\n ('zh-tw', '\u7e41\u9ad4\u4e2d\u6587'),\n)\n\n# Set this to your website/company's name. This is contained in email notifications and the welcome message when users log in for the first time.\nSITE_NAME = 'Seafile'\n\n# Browser tab's title\nSITE_TITLE = 'Private Seafile'\n\n# If you don't want to run the seahub website on your site's root path, set this option to your preferred path.\n# e.g. setting it to '/seahub/' would run seahub on http://example.com/seahub/.\nSITE_ROOT = '/'\n\n# Max number of files when users upload files/folders.\n# Since version 6.0.4\nMAX_NUMBER_OF_FILES_FOR_FILEUPLOAD = 500\n\n# Control the language used to send emails. 
Default to the user's current language.\n# Since version 6.1.1\nSHARE_LINK_EMAIL_LANGUAGE = ''\n\n# Interval at which the browser requests unread notifications\n# Since PRO 6.1.4 or CE 6.1.2\nUNREAD_NOTIFICATIONS_REQUEST_INTERVAL = 3 * 60 # seconds\n\n# Whether to allow users to delete their account, change login password or update basic user\n# info on the profile page.\n# Since PRO 6.3.10\nENABLE_DELETE_ACCOUNT = False\nENABLE_UPDATE_USER_INFO = False\nENABLE_CHANGE_PASSWORD = False\n\n# Get web api auth token on profile page.\nENABLE_GET_AUTH_TOKEN_BY_SESSION = True\n\n# Since 8.0.6 CE/PRO version.\n# URL redirected to after users log out of Seafile.\n# Usually configured as the Single Logout url.\nLOGOUT_REDIRECT_URL = 'http{s}://www.example-url.com'\n\n\n# Enable system admin to add T&C; all users need to accept the terms before using. Defaults to `False`.\n# Since version 6.0\nENABLE_TERMS_AND_CONDITIONS = True\n\n# Enable two factor authentication for accounts. Defaults to `False`.\n# Since version 6.0\nENABLE_TWO_FACTOR_AUTH = True\n\n# Enable users to select a template when creating a library.\n# When a user selects a template, Seafile will automatically create the folders related to the template.\n# Since version 6.0\nLIBRARY_TEMPLATES = {\n 'Technology': ['/Develop/Python', '/Test'],\n 'Finance': ['/Current assets', '/Fixed assets/Computer']\n}\n\n# Enable a user to change password on the 'settings' page. Default is `True`\n# Since version 6.2.11\nENABLE_CHANGE_PASSWORD = True\n\n# Whether to show the contact email when searching for users.\nENABLE_SHOW_CONTACT_EMAIL_WHEN_SEARCH_USER = True\n
"},{"location":"config/seahub_settings_py/#pro-edition-only-options","title":"Pro edition only options","text":"# Whether to show the used traffic in user's profile popup dialog. Default is True\nSHOW_TRAFFIC = True\n\n# Allow administrator to view user's file in UNENCRYPTED libraries\n# through Libraries page in System Admin. Default is False.\nENABLE_SYS_ADMIN_VIEW_REPO = True\n\n# For un-login users, providing an email before downloading or uploading on shared link page.\n# Since version 5.1.4\nENABLE_SHARE_LINK_AUDIT = True\n\n# Check virus after upload files to shared upload links. Defaults to `False`.\n# Since version 6.0\nENABLE_UPLOAD_LINK_VIRUS_CHECK = True\n\n# Send email to these email addresses when a virus is detected.\n# This list can be any valid email address, not necessarily the emails of Seafile user.\n# Since version 6.0.8\nVIRUS_SCAN_NOTIFY_LIST = ['user_a@seafile.com', 'user_b@seafile.com']\n
"},{"location":"config/seahub_settings_py/#restful-api","title":"RESTful API","text":"# API throttling related settings. Enlarger the rates if you got 429 response code during API calls.\nREST_FRAMEWORK = {\n 'DEFAULT_THROTTLE_RATES': {\n 'ping': '600/minute',\n 'anon': '5/minute',\n 'user': '300/minute',\n },\n 'UNICODE_JSON': False,\n}\n\n# Throtting whitelist used to disable throttle for certain IPs.\n# e.g. REST_FRAMEWORK_THROTTING_WHITELIST = ['127.0.0.1', '192.168.1.1']\n# Please make sure `REMOTE_ADDR` header is configured in Nginx conf according to https://manual.seafile.com/12.0/setup_binary/ce/deploy_with_nginx.html.\nREST_FRAMEWORK_THROTTING_WHITELIST = []\n
"},{"location":"config/seahub_settings_py/#seahub-custom-functions","title":"Seahub Custom Functions","text":"Since version 6.2, you can define a custom function to modify the result of user search function.
For example, if you want to limit users to only searching for users in the same institution, you can define a custom_search_user
function in {seafile install path}/conf/seahub_custom_functions/__init__.py
Code example:
import os\nimport sys\n\ncurrent_path = os.path.dirname(os.path.abspath(__file__))\nseahub_dir = os.path.join(current_path, \\\n '../../seafile-server-latest/seahub/seahub')\nsys.path.append(seahub_dir)\n\nfrom seahub.profile.models import Profile\ndef custom_search_user(request, emails):\n\n institution_name = ''\n\n username = request.user.username\n profile = Profile.objects.get_profile_by_user(username)\n if profile:\n institution_name = profile.institution\n\n inst_users = [p.user for p in\n Profile.objects.filter(institution=institution_name)]\n\n filtered_emails = []\n for email in emails:\n if email in inst_users:\n filtered_emails.append(email)\n\n return filtered_emails\n
You should NOT change the name of custom_search_user
and seahub_custom_functions/__init__.py
Since version 6.2.5 pro, if you enable the ENABLE_SHARE_TO_ALL_GROUPS feature on the sysadmin settings page, you can also define a custom function to return the groups a user can share a library to.
For example, if you want to let a user share a library to both their own groups and the groups of user test@test.com
, you can define a custom_get_groups
function in {seafile install path}/conf/seahub_custom_functions/__init__.py
Code example:
import os\nimport sys\n\ncurrent_path = os.path.dirname(os.path.abspath(__file__))\nseaserv_dir = os.path.join(current_path, \\\n '../../seafile-server-latest/seafile/lib64/python2.7/site-packages')\nsys.path.append(seaserv_dir)\n\ndef custom_get_groups(request):\n\n from seaserv import ccnet_api\n\n groups = []\n username = request.user.username\n\n # for current user\n groups += ccnet_api.get_groups(username)\n\n # for 'test@test.com' user\n groups += ccnet_api.get_groups('test@test.com')\n\n return groups\n
You should NOT change the name of custom_get_groups
and seahub_custom_functions/__init__.py
Tip
docker compose restart\n
cd /opt/seafile/seafile-server-latest\n./seahub.sh restart\n
There are currently five types of emails sent in Seafile:
The first four types of email are sent immediately. The last type is sent by a background task running periodically.
"},{"location":"config/sending_email/#options-of-email-sending","title":"Options of Email Sending","text":"Please add the following lines to seahub_settings.py
to enable email sending.
EMAIL_USE_TLS = True\nEMAIL_HOST = 'smtp.example.com' # smtp server\nEMAIL_HOST_USER = 'username@example.com' # username and domain\nEMAIL_HOST_PASSWORD = 'password' # password\nEMAIL_PORT = 587\nDEFAULT_FROM_EMAIL = EMAIL_HOST_USER\nSERVER_EMAIL = EMAIL_HOST_USER\n
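The settings above describe a standard SMTP submission. The sketch below shows how such values end up in an outgoing message, using only Python's standard library (the addresses are the placeholders from the example above; actual delivery via smtplib is deliberately left out):

```python
from email.message import EmailMessage

# Values mirroring the seahub_settings.py example above (illustrative)
EMAIL_HOST = 'smtp.example.com'
EMAIL_PORT = 587
EMAIL_HOST_USER = 'username@example.com'
DEFAULT_FROM_EMAIL = EMAIL_HOST_USER

def build_notification(to_addr, subject, body):
    """Build the kind of message that would be handed to the SMTP server."""
    msg = EmailMessage()
    msg['From'] = DEFAULT_FROM_EMAIL
    msg['To'] = to_addr
    msg['Subject'] = subject
    msg.set_content(body)
    return msg

msg = build_notification('user@example.com', 'Test mail', 'It works.')
# Actual delivery would use smtplib.SMTP(EMAIL_HOST, EMAIL_PORT) followed by
# starttls() and login(); omitted here so the sketch stays offline.
```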
Note
If your email service still does not work, you can check the log file logs/seahub.log
to see what may cause the problem. For a complete email notification list, please refer to email notification list.
If you want to use the email service without authentication, leave EMAIL_HOST_USER
and EMAIL_HOST_PASSWORD
blank (''
). (But note that the emails will then be sent without a From:
address.)
To use an SSL connection (port 465), set
EMAIL_USE_SSL = True
instead of EMAIL_USE_TLS
.reply to
of email","text":"You can change the reply to field of email by add the following settings to seahub_settings.py. This only affects email sending for file share link.
# Set reply-to header to user's email or not, defaults to ``False``. For details,\n# please refer to http://www.w3.org/Protocols/rfc822/\nADD_REPLY_TO_HEADER = True\n
"},{"location":"config/sending_email/#config-background-email-sending-task","title":"Config background email sending task","text":"The background task will run periodically to check whether an user have new unread notifications. If there are any, it will send a reminder email to that user. The background email sending task is controlled by seafevents.conf
.
[SEAHUB EMAIL]\n\n## must be \"true\" to enable user email notifications when there are new unread notifications\nenabled = true\n\n## interval of sending seahub email. Can be s(seconds), m(minutes), h(hours), d(days)\ninterval = 30m\n
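The interval accepts a number plus one of the units listed above. A small sketch of how such a value could be parsed into seconds (`parse_interval` is a hypothetical helper, not Seafile's actual parser):

```python
def parse_interval(value):
    """Parse a seafevents-style interval like '30m' into seconds.
    Supports s(econds), m(inutes), h(ours), d(ays)."""
    units = {'s': 1, 'm': 60, 'h': 3600, 'd': 86400}
    if value and value[-1] in units:
        return int(value[:-1]) * units[value[-1]]
    return int(value)  # bare number: already in seconds

print(parse_interval('30m'))  # → 1800
```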
"},{"location":"config/sending_email/#customize-email-messages","title":"Customize email messages","text":"The simplest way to customize the email messages is setting the SITE_NAME
variable in seahub_settings.py
. If it is not enough for your case, you can customize the email templates.
Tip
Subject lines may vary between releases; the following is based on Release 5.0.0. Restart Seahub so that your changes take effect.
"},{"location":"config/sending_email/#the-email-base-template","title":"The email base template","text":"seahub/seahub/templates/email_base.html
Tip
You can copy email_base.html to seahub-data/custom/templates/email_base.html
and modify the new one. In this way, the customization will be maintained after upgrade.
Subject
seahub/seahub/auth/forms.py line:127
send_html_email(_(\"Reset Password on %s\") % site_name,\n email_template_name, c, None, [user.username])\n
Body
seahub/seahub/templates/registration/password_reset_email.html
Tip
You can copy password_reset_email.html to seahub-data/custom/templates/registration/password_reset_email.html
and modify the new one. In this way, the customization will be maintained after upgrade.
Subject
seahub/seahub/views/sysadmin.py line:424
send_html_email(_(u'Password has been reset on %s') % SITE_NAME,\n 'sysadmin/user_reset_email.html', c, None, [email])\n
Body
seahub/seahub/templates/sysadmin/user_add_email.html
Tip
You can copy user_add_email.html to seahub-data/custom/templates/sysadmin/user_add_email.html
and modify the new one. In this way, the customization will be maintained after upgrade.
Subject
seahub/seahub/views/sysadmin.py line:1224
send_html_email(_(u'Password has been reset on %s') % SITE_NAME,\n 'sysadmin/user_reset_email.html', c, None, [email])\n
Body
seahub/seahub/templates/sysadmin/user_reset_email.html
Tip
You can copy user_reset_email.html to seahub-data/custom/templates/sysadmin/user_reset_email.html
and modify the new one. In this way, the customization will be maintained after upgrade.
Subject
seahub/seahub/share/views.py line:913
try:\n if file_shared_type == 'f':\n c['file_shared_type'] = _(u\"file\")\n send_html_email(_(u'A file is shared to you on %s') % SITE_NAME,\n 'shared_link_email.html',\n c, from_email, [to_email],\n reply_to=reply_to\n )\n else:\n c['file_shared_type'] = _(u\"directory\")\n send_html_email(_(u'A directory is shared to you on %s') % SITE_NAME,\n 'shared_link_email.html',\n c, from_email, [to_email],\n reply_to=reply_to)\n
Body
seahub/seahub/templates/shared_link_email.html
seahub/seahub/templates/shared_upload_link_email.html
Tip
You can copy shared_link_email.html to seahub-data/custom/templates/shared_link_email.html
and modify the new one. In this way, the customization will be maintained after upgrade.
Subject
send_html_email(_('New notice on %s') % settings.SITE_NAME,\n 'notifications/notice_email.html', c,\n None, [to_user])\n
Body
seahub/seahub/notifications/templates/notifications/notice_email.html
"},{"location":"config/shibboleth_authentication/","title":"Shibboleth Authentication","text":"Shibboleth is a widely used single sign on (SSO) protocol. Seafile supports authentication via Shibboleth. It allows users from another organization to log in to Seafile without registering an account on the service provider.
In this documentation, we assume the reader is familiar with Shibboleth installation and configuration. For introduction to Shibboleth concepts, please refer to https://shibboleth.atlassian.net/wiki/spaces/CONCEPT/overview .
Shibboleth Service Provider (SP) should be installed on the same server as the Seafile server. The official SP from https://shibboleth.net/ is implemented as an Apache module. The module handles all Shibboleth authentication details. Seafile server receives authentication information (username) from HTTP request. The username then can be used as login name for the user.
Seahub provides a special URL to handle Shibboleth login. The URL is https://your-seafile-domain/sso
. Only this URL needs to be configured under Shibboleth protection. All other URLs don't go through the Shibboleth module. The overall workflow for a user to login with Shibboleth is as follows:
1. The user visits https://your-seafile-domain/sso
. 2. The URL is protected by the Shibboleth Apache module, so the user is redirected to the IdP's login page; after authenticating there, the user is redirected back to https://your-seafile-domain/sso
. 3. Seahub reads the username from the request (the HTTP_REMOTE_USER
header) and brings the user to her/his home page. Since Shibboleth support requires Apache, if you want to use Nginx, you need two servers: one for non-Shibboleth access, another configured with Apache to allow Shibboleth login. In a cluster environment, you can configure your load balancer to direct traffic to different servers according to URL. Only the URL https://your-seafile-domain/sso
needs to be directed to Apache.
The configuration includes 3 steps:
We use CentOS 7 as example.
"},{"location":"config/shibboleth_authentication/#configure-apache","title":"Configure Apache","text":"You should create a new virtual host configuration for Shibboleth. And then restart Apache.
<IfModule mod_ssl.c>\n <VirtualHost _default_:443>\n ServerName your-seafile-domain\n DocumentRoot /var/www\n Alias /media /opt/seafile/seafile-server-latest/seahub/media\n\n ErrorLog ${APACHE_LOG_DIR}/seahub.error.log\n CustomLog ${APACHE_LOG_DIR}/seahub.access.log combined\n\n SSLEngine on\n SSLCertificateFile /path/to/ssl-cert.pem\n SSLCertificateKeyFile /path/to/ssl-key.pem\n\n <Location /Shibboleth.sso>\n SetHandler shib\n AuthType shibboleth\n ShibRequestSetting requireSession 1\n Require valid-user\n </Location>\n\n <Location /sso>\n SetHandler shib\n AuthType shibboleth\n ShibUseHeaders On\n ShibRequestSetting requireSession 1\n Require valid-user\n </Location>\n\n RewriteEngine On\n <Location /media>\n Require all granted\n </Location>\n\n # seafile fileserver\n ProxyPass /seafhttp http://127.0.0.1:8082\n ProxyPassReverse /seafhttp http://127.0.0.1:8082\n RewriteRule ^/seafhttp - [QSA,L]\n\n # seahub\n SetEnvIf Authorization \"(.*)\" HTTP_AUTHORIZATION=$1\n ProxyPass / http://127.0.0.1:8000/\n ProxyPassReverse / http://127.0.0.1:8000/\n\n # for http\n # RequestHeader set REMOTE_USER %{REMOTE_USER}e\n # for https\n RequestHeader set REMOTE_USER %{REMOTE_USER}s\n </VirtualHost>\n</IfModule>\n
"},{"location":"config/shibboleth_authentication/#install-and-configure-shibboleth","title":"Install and Configure Shibboleth","text":"Installation and configuration of Shibboleth is out of the scope of this documentation. You can refer to the official Shibboleth document.
"},{"location":"config/shibboleth_authentication/#configure-shibbolethsp","title":"Configure Shibboleth(SP)","text":""},{"location":"config/shibboleth_authentication/#shibboleth2xml","title":"shibboleth2.xml","text":"Open /etc/shibboleth/shibboleth2.xml
and change some properties. After you have made all the following changes, don't forget to restart Shibboleth(SP)
ApplicationDefaults
element","text":"Change entityID
and REMOTE_USER
property:
<!-- The ApplicationDefaults element is where most of Shibboleth's SAML bits are defined. -->\n<ApplicationDefaults entityID=\"https://your-seafile-domain/sso\"\n REMOTE_USER=\"mail\"\n cipherSuites=\"DEFAULT:!EXP:!LOW:!aNULL:!eNULL:!DES:!IDEA:!SEED:!RC4:!3DES:!kRSA:!SSLv2:!SSLv3:!TLSv1:!TLSv1.1\">\n
Seahub extracts the username from the REMOTE_USER
environment variable. So you should modify your SP's shibboleth2.xml config file, so that Shibboleth translates your desired attribute into REMOTE_USER
environment variable.
In Seafile, only one of the following two attributes can be used for username: eppn
, and mail
. eppn
stands for \"Edu Person Principal Name\". It is usually the UserPrincipalName attribute in Active Directory. It's not necessarily a valid email address. mail
is the user's email address. You should set REMOTE_USER
to either one of these attributes.
SSO
element","text":"Change entityID
property:
<!--\nConfigures SSO for a default IdP. To properly allow for >1 IdP, remove\nentityID property and adjust discoveryURL to point to discovery service.\nYou can also override entityID on /Login query string, or in RequestMap/htaccess.\n-->\n<SSO entityID=\"https://your-IdP-domain\">\n <!--discoveryProtocol=\"SAMLDS\" discoveryURL=\"https://wayf.ukfederation.org.uk/DS\"-->\n SAML2\n</SSO>\n
"},{"location":"config/shibboleth_authentication/#metadataprovider-element","title":"MetadataProvider
element","text":"Change url
and backingFilePath
property:
<!-- Example of remotely supplied batch of signed metadata. -->\n<MetadataProvider type=\"XML\" validate=\"true\"\n url=\"http://your-IdP-metadata-url\"\n backingFilePath=\"your-IdP-metadata.xml\" maxRefreshDelay=\"7200\">\n <MetadataFilter type=\"RequireValidUntil\" maxValidityInterval=\"2419200\"/>\n <MetadataFilter type=\"Signature\" certificate=\"fedsigner.pem\" verifyBackup=\"false\"/>\n
"},{"location":"config/shibboleth_authentication/#attribute-mapxml","title":"attribute-map.xml","text":"Open /etc/shibboleth/attribute-map.xml
and change some properties. After you have made all the following changes, don't forget to restart Shibboleth(SP)
Attribute
element","text":"Uncomment attribute elements for getting more user info:
<!-- Older LDAP-defined attributes (SAML 2.0 names followed by SAML 1 names)... -->\n<Attribute name=\"urn:oid:2.16.840.1.113730.3.1.241\" id=\"displayName\"/>\n<Attribute name=\"urn:oid:0.9.2342.19200300.100.1.3\" id=\"mail\"/>\n\n<Attribute name=\"urn:mace:dir:attribute-def:displayName\" id=\"displayName\"/>\n<Attribute name=\"urn:mace:dir:attribute-def:mail\" id=\"mail\"/>\n
"},{"location":"config/shibboleth_authentication/#upload-shibbolethsps-metadata","title":"Upload Shibboleth(SP)'s metadata","text":"After restarting Apache, you should be able to get the Service Provider metadata by accessing https://your-seafile-domain/Shibboleth.sso/Metadata. This metadata should be uploaded to the Identity Provider (IdP) server.
"},{"location":"config/shibboleth_authentication/#configure-seahub","title":"Configure Seahub","text":"Add the following configuration to seahub_settings.py.
ENABLE_SHIB_LOGIN = True\nSHIBBOLETH_USER_HEADER = 'HTTP_REMOTE_USER'\n# basic user attributes\nSHIBBOLETH_ATTRIBUTE_MAP = {\n \"HTTP_DISPLAYNAME\": (False, \"display_name\"),\n \"HTTP_MAIL\": (False, \"contact_email\"),\n}\nEXTRA_MIDDLEWARE = (\n 'shibboleth.middleware.ShibbolethRemoteUserMiddleware',\n)\nEXTRA_AUTHENTICATION_BACKENDS = (\n 'shibboleth.backends.ShibbolethRemoteUserBackend',\n)\n
Seahub can process additional user attributes from Shibboleth. These attributes are saved into Seahub's database as user properties. None of them is mandatory. The internal user properties Seahub currently supports are:
You can specify the mapping between Shibboleth attributes and Seahub's user properties in seahub_settings.py:
SHIBBOLETH_ATTRIBUTE_MAP = {\n \"HTTP_GIVENNAME\": (False, \"givenname\"),\n \"HTTP_SN\": (False, \"surname\"),\n \"HTTP_MAIL\": (False, \"contact_email\"),\n \"HTTP_ORGANIZATION\": (False, \"institution\"),\n}\n
In the above config, each key is a Shibboleth attribute name, and the second element of each value is the corresponding Seahub property name. You can adjust the Shibboleth attribute names for your own needs.
You may have to change attribute-map.xml in your Shibboleth SP, so that the desired attributes are passed to Seahub. And you have to make sure the IdP sends these attributes to the SP.
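The mapping can be illustrated with a small sketch (a simplification of what the Shibboleth middleware does; treating the boolean in each tuple as a "required" flag is an assumption here, and `map_shib_attributes` is a hypothetical helper):

```python
SHIBBOLETH_ATTRIBUTE_MAP = {
    "HTTP_GIVENNAME": (False, "givenname"),
    "HTTP_SN": (False, "surname"),
    "HTTP_MAIL": (False, "contact_email"),
    "HTTP_ORGANIZATION": (False, "institution"),
}

def map_shib_attributes(meta, attribute_map):
    """Translate Shibboleth request headers (request.META) into Seahub
    user properties according to the attribute map."""
    props = {}
    for header, (required, prop) in attribute_map.items():
        if header in meta:
            props[prop] = meta[header]
        elif required:
            raise KeyError("missing required Shibboleth attribute " + header)
    return props

meta = {"HTTP_GIVENNAME": "Jane", "HTTP_MAIL": "jane@example.com"}
print(map_shib_attributes(meta, SHIBBOLETH_ATTRIBUTE_MAP))
```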
We also added an option SHIB_ACTIVATE_AFTER_CREATION
(defaults to True
), which controls the user status after a Shibboleth login. If this option is set to False
, the user will be inactive after login, and system admins will be notified by email to activate that account.
Shibboleth has a field called affiliation. It is a list like: employee@uni-mainz.de;member@uni-mainz.de;faculty@uni-mainz.de;staff@uni-mainz.de.
We are able to set a user's role from Shibboleth. For details about user roles, please refer to Roles and Permissions
To enable this, modify SHIBBOLETH_ATTRIBUTE_MAP
above and add a Shibboleth-affiliation
field. You may need to change Shibboleth-affiliation
according to your Shibboleth SP attributes.
SHIBBOLETH_ATTRIBUTE_MAP = {\n \"HTTP_GIVENNAME\": (False, \"givenname\"),\n \"HTTP_SN\": (False, \"surname\"),\n \"HTTP_MAIL\": (False, \"contact_email\"),\n \"HTTP_ORGANIZATION\": (False, \"institution\"),\n \"HTTP_Shibboleth-affiliation\": (False, \"affiliation\"),\n}\n
Then add a new config option to define the affiliation-to-role map:
SHIBBOLETH_AFFILIATION_ROLE_MAP = {\n 'employee@uni-mainz.de': 'staff',\n 'member@uni-mainz.de': 'staff',\n 'student@uni-mainz.de': 'student',\n 'employee@hu-berlin.de': 'guest',\n 'patterns': (\n ('*@hu-berlin.de', 'guest1'),\n ('*@*.de', 'guest2'),\n ('*', 'guest'),\n ),\n}\n
After Shibboleth login, Seafile calculates the user's role from the affiliation and SHIBBOLETH_AFFILIATION_ROLE_MAP.
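A sketch of such a lookup, checking exact keys first and then the wildcard patterns in order (illustrative only; Seafile's actual matching order may differ, and `role_for_affiliation` is a hypothetical helper):

```python
from fnmatch import fnmatch

SHIBBOLETH_AFFILIATION_ROLE_MAP = {
    'employee@uni-mainz.de': 'staff',
    'member@uni-mainz.de': 'staff',
    'student@uni-mainz.de': 'student',
    'employee@hu-berlin.de': 'guest',
    'patterns': (
        ('*@hu-berlin.de', 'guest1'),
        ('*@*.de', 'guest2'),
        ('*', 'guest'),
    ),
}

def role_for_affiliation(affiliation, role_map):
    """Return the role for the first affiliation entry that matches,
    preferring exact keys over wildcard patterns."""
    for entry in affiliation.split(';'):
        if entry in role_map:
            return role_map[entry]
        for pattern, role in role_map.get('patterns', ()):
            if fnmatch(entry, pattern):
                return role
    return None

print(role_for_affiliation('employee@uni-mainz.de;member@uni-mainz.de',
                           SHIBBOLETH_AFFILIATION_ROLE_MAP))  # → staff
```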
"},{"location":"config/shibboleth_authentication/#verify","title":"Verify","text":"After restarting Apache and Seahub service (./seahub.sh restart
), you can then test the Shibboleth login workflow.
If you encounter problems during login, follow these steps to get debug info (for Seafile Pro 6.3.13).
"},{"location":"config/shibboleth_authentication/#add-this-setting-to-seahub_settingspy","title":"Add this setting toseahub_settings.py
","text":"DEBUG = True\n
"},{"location":"config/shibboleth_authentication/#change-seafiles-code","title":"Change Seafile's code","text":"Open seafile-server-latest/seahub/thirdpart/shibboleth/middleware.py
Insert the following code at line 59
assert False\n
Insert the following code at line 65
if not username:\n assert False\n
The complete code after these changes is as follows:
#Locate the remote user header.\n# import pprint; pprint.pprint(request.META)\ntry:\n username = request.META[SHIB_USER_HEADER]\nexcept KeyError:\n assert False\n # If specified header doesn't exist then return (leaving\n # request.user set to AnonymousUser by the\n # AuthenticationMiddleware).\n return\n\nif not username:\n assert False\n\np_id = ccnet_api.get_primary_id(username)\nif p_id is not None:\n username = p_id\n
Then restart Seafile and log in again; you will see the debug info in the web page.
"},{"location":"config/single_sign_on/","title":"Single Sign On support in Seafile","text":"Seafile supports most of the popular single-sign-on authentication protocols. Some are included in Community Edition, some are only in Pro Edition.
In the Community Edition:
Kerberos authentication can be integrated by using Apache as a proxy server and following the instructions in Remote User Authentication and Auto Login SeaDrive on Windows.
In Pro Edition:
Build Seafile
Seafile Open API
Seafile Implement Details
You can build Seafile from our source code package or from the Github repo directly.
Client
Server
Seafile internally uses a data model similar to Git's. It consists of Repo
, Commit
, FS
, and Block
.
Seafile's high performance comes from its architectural design: it stores file metadata in object storage (or the file system), while storing only a small amount of metadata about the libraries in a relational database. An overview of the architecture is depicted below. We'll describe the data model in more detail.
"},{"location":"develop/data_model/#repo","title":"Repo","text":"A repo is also called a library. Every repo has an unique id (UUID), and attributes like description, creator, password.
The metadata for a repo is stored in seafile_db
database and in the commit objects (described in a later section).
There are a few tables in the seafile_db
database containing important information about each repo.
Repo
: contains the ID for each repo.RepoOwner
: contains the owner id for each repo.RepoInfo
: it is a \"cache\" table for fast access to repo metadata stored in the commit object. It includes repo name, update time, last modifier.RepoSize
: the total size of all files in the repo.RepoFileCount
: the file count in the repo.RepoHead
: contains the \"head commit ID\". This ID points to the head commit in the storage, which will be described in the next section.Commit objects save the change history of a repo. Each update from the web interface, or sync upload operation will create a new commit object. A commit object contains the following information: commit ID, library name, creator of this commit (a.k.a. the modifier), creation time of this commit (a.k.a. modification time), root fs object ID, parent commit ID.
The root fs object ID points to the root FS object, from which we can traverse a file system snapshot for the repo.
The parent commit ID points to the last commit previous to the current commit. The RepoHead
table contains the latest head commit ID for each repo. From this head commit, we can traverse the repo history.
If you use file system as storage backend, commit objects are stored in the path seafile-data/storage/commits/<repo_id>
. If you use object storage, commit objects are stored in the commits
bucket.
There are two types of FS objects, SeafDir Object
and Seafile Object
. SeafDir Object
represents a directory, and Seafile Object
represents a file.
The SeafDir
object contains metadata for each file/sub-folder, which includes name, last modification time, last modifier, size, and object ID. The object ID points to another SeafDir
or Seafile
object. The Seafile
object contains a block list, which is a list of block IDs for the file.
The FS object IDs are calculated based on the contents of the object. That means if a folder or a file is not changed, the same objects will be reused across multiple commits. This allow us to create snapshots very efficiently.
If you use file system as storage backend, commit objects are stored in the path seafile-data/storage/fs/<repo_id>
. If you use object storage, commit objects are stored in the fs
bucket.
A file is further divided into blocks with variable lengths. We use Content Defined Chunking algorithm to divide file into blocks. A clear overview of this algorithm can be found at http://pdos.csail.mit.edu/papers/lbfs:sosp01/lbfs.pdf. On average, a block's size is around 8MB.
This mechanism makes it possible to deduplicate data between different versions of frequently updated files, improving storage efficiency. It also enables transferring data to/from multiple servers in parallel.
If you use file system as storage backend, commit objects are stored in the path seafile-data/storage/blocks/<repo_id>
. If you use object storage, commit objects are stored in the blocks
bucket.
A \"virtual repo\" is a special repo that will be created in the cases below:
A virtual repo can be understood as a view for part of the data in its parent library. For example, when sharing a folder, the virtual repo only provides access to the shared folder in that library. Virtual repo use the same underlying data as the parent library. So virtual repos use the same fs
and blocks
storage location as its parent.
Virtual repo has its own change history. So it has separate commits
storage location from its parent. The changes in virtual repo and its parent repo will be bidirectional merged. So that changes from each side can be seen from another.
There is a VirtualRepo
table in seafile_db
database. It contains the folder path in the parent repo for each virtual repo.
The following list is what you need to install on your development machine. You should install all of them before you build Seafile.
Package names are according to Ubuntu 14.04. For other Linux distros, please find their corresponding names yourself.
sudo apt-get install autoconf automake libtool libevent-dev libcurl4-openssl-dev libgtk2.0-dev uuid-dev intltool libsqlite3-dev valac libjansson-dev cmake qtchooser qtbase5-dev libqt5webkit5-dev qttools5-dev qttools5-dev-tools libssl-dev libargon2-dev\n
$ sudo yum install wget gcc libevent-devel openssl-devel gtk2-devel libuuid-devel sqlite-devel jansson-devel intltool cmake libtool vala gcc-c++ qt5-qtbase-devel qt5-qttools-devel qt5-qtwebkit-devel libcurl-devel openssl-devel argon2-devel\n
"},{"location":"develop/linux/#building","title":"Building","text":"First you should get the latest source of libsearpc/ccnet/seafile/seafile-client:
Download the source tarball of the latest tag from
For example, if the latest released seafile client is 8.0.0, then just use the v8.0.0 tags of the four projects. You should get four tarballs:
# without alias wget= might not work\nshopt -s expand_aliases\n\nexport version=8.0.0\nalias wget='wget --content-disposition -nc'\nwget https://github.com/haiwen/libsearpc/archive/v3.2-latest.tar.gz\nwget https://github.com/haiwen/ccnet/archive/v${version}.tar.gz \nwget https://github.com/haiwen/seafile/archive/v${version}.tar.gz\nwget https://github.com/haiwen/seafile-client/archive/v${version}.tar.gz\n
Now uncompress them:
tar xf libsearpc-3.2-latest.tar.gz\ntar xf ccnet-${version}.tar.gz\ntar xf seafile-${version}.tar.gz\ntar xf seafile-client-${version}.tar.gz\n
To build Seafile client, you need first build libsearpc and ccnet, seafile.
"},{"location":"develop/linux/#set-paths","title":"set paths","text":"export PREFIX=/usr\nexport PKG_CONFIG_PATH=\"$PREFIX/lib/pkgconfig:$PKG_CONFIG_PATH\"\nexport PATH=\"$PREFIX/bin:$PATH\"\n
"},{"location":"develop/linux/#libsearpc","title":"libsearpc","text":"cd libsearpc-3.2-latest\n./autogen.sh\n./configure --prefix=$PREFIX\nmake\nsudo make install\ncd ..\n
"},{"location":"develop/linux/#seafile","title":"seafile","text":"In order to support notification server, you need to build libwebsockets first.
git clone --branch=v4.3.0 https://github.com/warmcat/libwebsockets\ncd libwebsockets\nmkdir build\ncd build\ncmake ..\nmake\nsudo make install\ncd ..\n
You can set --enable-ws
to no to disable notification server. After that, you can build seafile:
cd seafile-${version}/\n./autogen.sh\n./configure --prefix=$PREFIX --disable-fuse\nmake\nsudo make install\ncd ..\n
"},{"location":"develop/linux/#seafile-client","title":"seafile-client","text":"cd seafile-client-${version}\ncmake -DCMAKE_BUILD_TYPE=Release -DCMAKE_INSTALL_PREFIX=$PREFIX .\nmake\nsudo make install\ncd ..\n
"},{"location":"develop/linux/#custom-prefix","title":"custom prefix","text":"when installing to a custom $PREFIX
, i.e. /opt
, you may need a script to set the path variables correctly
cat >$PREFIX/bin/seafile-applet.sh <<END\n#!/bin/bash\nexport LD_LIBRARY_PATH=\"$PREFIX/lib:$LD_LIBRARY_PATH\"\nexport PATH=\"$PREFIX/bin:$PATH\"\nexec seafile-applet $@\nEND\ncat >$PREFIX/bin/seaf-cli.sh <<END\nexport LD_LIBRARY_PATH=\"$PREFIX/lib:$LD_LIBRARY_PATH\"\nexport PATH=\"$PREFIX/bin:$PATH\"\nexport PYTHONPATH=$PREFIX/lib/python2.7/site-packages\nexec seaf-cli $@\nEND\nchmod +x $PREFIX/bin/seafile-applet.sh $PREFIX/bin/seaf-cli.sh\n
you can now start the client with $PREFIX/bin/seafile-applet.sh
.
The following setups are required for building and packaging Sync Client on macOS:
universal_archs arm64 x86_64
. Specifies the architecture on which MapPorts is compiled.+universal
. MacPorts installs universal versions of all ports.sudo port install autoconf automake pkgconfig libtool glib2 libevent vala openssl git jansson cmake libwebsockets argon2
.export PKG_CONFIG_PATH=$PKG_CONFIG_PATH:/opt/local/lib/pkgconfig:/usr/local/lib/pkgconfig\nexport PATH=/opt/local/bin:/usr/local/bin:/opt/local/Library/Frameworks/Python.framework/Versions/3.10/bin:$PATH\nexport LDFLAGS=\"-L/opt/local/lib -L/usr/local/lib\"\nexport CFLAGS=\"-I/opt/local/include -I/usr/local/include\"\nexport CPPFLAGS=\"-I/opt/local/include -I/usr/local/include\"\nexport LD_LIBRARY_PATH=/opt/lib:/usr/local/lib:/opt/local/lib/:/usr/local/lib/:$LD_LIBRARY_PATH\n\nQT_BASE=$HOME/Qt/6.2.4/macos\nexport PATH=$QT_BASE/bin:$PATH\nexport PKG_CONFIG_PATH=$QT_BASE/lib/pkgconfig:$PKG_CONFIG_PATH\nexport NOTARIZE_APPLE_ID=\"Your notarize account\"\nexport NOTARIZE_PASSWORD=\"Your notarize password\"\nexport NOTARIZE_TEAM_ID=\"Your notarize team id\"\n
Following directory structures are expected when building Sync Client:
seafile-workspace/\nseafile-workspace/libsearpc/\nseafile-workspace/seafile/\nseafile-workspace/seafile-client/\n
The source code of these projects can be downloaded at github.com/haiwen/libsearpc, github.com/haiwen/seafile, and github.com/haiwen/seafile-client.
"},{"location":"develop/osx/#building","title":"Building","text":"Note: the building commands have been included in the packaging script, you can skip building commands while packaging.
To build libsearpc:
$ cd seafile-workspace/libsearpc/\n$ ./autogen.sh\n$ ./configure --disable-compile-demo --enable-compile-universal=yes\n$ make\n$ make install\n
To build seafile:
$ cd seafile-workspace/seafile/\n$ ./autogen.sh\n$ ./configure --disable-fuse --enable-compile-universal=yes\n$ make\n$ make install\n
To build seafile-client:
$ cd seafile-workspace/seafile-client/\n$ cmake -GXcode -B. -S.\n$ xcodebuild -target seafile-applet -configuration Release\n
"},{"location":"develop/osx/#packaging","title":"Packaging","text":"python3 build-mac-local-py3.py --brand=\"\" --version=1.0.0 --nostrip --universal
From Seafile 11.0, you can build Seafile release package with seafile-build script. You can check the README.md file in the same folder for detailed instructions.
The seafile-build.sh
compatible with more platforms, including Raspberry Pi, arm-64, x86-64.
Old version is below:
Table of contents:
Requirements:
sudo apt-get install build-essential\nsudo apt-get install libevent-dev libcurl4-openssl-dev libglib2.0-dev uuid-dev intltool libsqlite3-dev libmysqlclient-dev libarchive-dev libtool libjansson-dev valac libfuse-dev re2c flex python-setuptools cmake\n
"},{"location":"develop/rpi/#compile-development-libraries","title":"Compile development libraries","text":""},{"location":"develop/rpi/#libevhtp","title":"libevhtp","text":"libevhtp is a http server libary on top of libevent. It's used in seafile file server.
git clone https://www.github.com/haiwen/libevhtp.git\ncd libevhtp\ncmake -DEVHTP_DISABLE_SSL=ON -DEVHTP_BUILD_SHARED=OFF .\nmake\nsudo make install\n
After compiling all the libraries, run ldconfig
to update the system libraries cache:
sudo ldconfig\n
"},{"location":"develop/rpi/#install-python-libraries","title":"Install python libraries","text":"Create a new directory /home/pi/dev/seahub_thirdpart
:
mkdir -p ~/dev/seahub_thirdpart\n
Download these tarballs to /tmp/
:
Install all these libraries to /home/pi/dev/seahub_thirdpart
:
cd ~/dev/seahub_thirdpart\nexport PYTHONPATH=.\npip install -t ~/dev/seahub_thirdpart/ /tmp/pytz-2016.1.tar.gz\npip install -t ~/dev/seahub_thirdpart/ /tmp/Django-1.8.10.tar.gz\npip install -t ~/dev/seahub_thirdpart/ /tmp/django-statici18n-1.1.3.tar.gz\npip install -t ~/dev/seahub_thirdpart/ /tmp/djangorestframework-3.3.2.tar.gz\npip install -t ~/dev/seahub_thirdpart/ /tmp/django_compressor-1.4.tar.gz\npip install -t ~/dev/seahub_thirdpart/ /tmp/jsonfield-1.0.3.tar.gz\npip install -t ~/dev/seahub_thirdpart/ /tmp/django-post_office-2.0.6.tar.gz\npip install -t ~/dev/seahub_thirdpart/ /tmp/gunicorn-19.4.5.tar.gz\npip install -t ~/dev/seahub_thirdpart/ /tmp/flup-1.0.2.tar.gz\npip install -t ~/dev/seahub_thirdpart/ /tmp/chardet-2.3.0.tar.gz\npip install -t ~/dev/seahub_thirdpart/ /tmp/python-dateutil-1.5.tar.gz\npip install -t ~/dev/seahub_thirdpart/ /tmp/six-1.9.0.tar.gz\npip install -t ~/dev/seahub_thirdpart/ /tmp/django-picklefield-0.3.2.tar.gz\nwget -O /tmp/django_constance.zip https://github.com/haiwen/django-constance/archive/bde7f7c.zip\npip install -t ~/dev/seahub_thirdpart/ /tmp/django_constance.zip\npip install -t ~/dev/seahub_thirdpart/ /tmp/jdcal-1.2.tar.gz\npip install -t ~/dev/seahub_thirdpart/ /tmp/et_xmlfile-1.0.1.tar.gz\npip install -t ~/dev/seahub_thirdpart/ /tmp/openpyxl-2.3.0.tar.gz\n
"},{"location":"develop/rpi/#prepare-seafile-source-code","title":"Prepare seafile source code","text":"To build seafile server, there are four sub projects involved:
The build process has two steps:
build-server.py
script to build the server package from the source tarballs. Seafile manages its releases in tags on GitHub.
Assume we are packaging for seafile server 6.0.1, then the tags are:
v6.0.1-server
tag. v3.0-latest
tag (libsearpc has been quite stable and basically has no further development, so the tag is always v3.0-latest
). First set up the PKG_CONFIG_PATH
environment variable (so we don't need to run make and make install for libsearpc/ccnet/seafile into the system):
export PKG_CONFIG_PATH=/home/pi/dev/seafile/lib:$PKG_CONFIG_PATH\nexport PKG_CONFIG_PATH=/home/pi/dev/libsearpc:$PKG_CONFIG_PATH\nexport PKG_CONFIG_PATH=/home/pi/dev/ccnet:$PKG_CONFIG_PATH\n
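The order of these exports matters: pkg-config searches the listed directories left to right, so each later export takes priority. A minimal sketch of how the final search path is built up (paths are the ones assumed in this guide):

```shell
# Each export above prepends one directory, so the final search order is
# ccnet first, then libsearpc, then seafile/lib.
PKG_CONFIG_PATH="/home/pi/dev/seafile/lib"
PKG_CONFIG_PATH="/home/pi/dev/libsearpc:$PKG_CONFIG_PATH"
PKG_CONFIG_PATH="/home/pi/dev/ccnet:$PKG_CONFIG_PATH"
echo "$PKG_CONFIG_PATH"
```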
"},{"location":"develop/rpi/#libsearpc","title":"libsearpc","text":"cd ~/dev\ngit clone https://github.com/haiwen/libsearpc.git\ncd libsearpc\ngit reset --hard v3.0-latest\n./autogen.sh\n./configure\nmake dist\n
"},{"location":"develop/rpi/#ccnet","title":"ccnet","text":"cd ~/dev\ngit clone https://github.com/haiwen/ccnet-server.git\ncd ccnet\ngit reset --hard v6.0.1-server\n./autogen.sh\n./configure\nmake dist\n
"},{"location":"develop/rpi/#seafile","title":"seafile","text":"cd ~/dev\ngit clone https://github.com/haiwen/seafile-server.git\ncd seafile\ngit reset --hard v6.0.1-server\n./autogen.sh\n./configure\nmake dist\n
"},{"location":"develop/rpi/#seahub","title":"seahub","text":"cd ~/dev\ngit clone https://github.com/haiwen/seahub.git\ncd seahub\ngit reset --hard v6.0.1-server\n./tools/gen-tarball.py --version=6.0.1 --branch=HEAD\n
"},{"location":"develop/rpi/#seafobj","title":"seafobj","text":"cd ~/dev\ngit clone https://github.com/haiwen/seafobj.git\ncd seafobj\ngit reset --hard v6.0.1-server\nmake dist\n
"},{"location":"develop/rpi/#seafdav","title":"seafdav","text":"cd ~/dev\ngit clone https://github.com/haiwen/seafdav.git\ncd seafdav\ngit reset --hard v6.0.1-server\nmake\n
"},{"location":"develop/rpi/#copy-the-source-tar-balls-to-the-same-folder","title":"Copy the source tar balls to the same folder","text":"mkdir ~/seafile-sources\ncp ~/dev/libsearpc/libsearpc-<version>-tar.gz ~/seafile-sources\ncp ~/dev/ccnet/ccnet-<version>-tar.gz ~/seafile-sources\ncp ~/dev/seafile/seafile-<version>-tar.gz ~/seafile-sources\ncp ~/dev/seahub/seahub-<version>-tar.gz ~/seafile-sources\n\ncp ~/dev/seafobj/seafobj.tar.gz ~/seafile-sources\ncp ~/dev/seafdav/seafdav.tar.gz ~/seafile-sources\n
"},{"location":"develop/rpi/#run-the-packaging-script","title":"Run the packaging script","text":"Now we have all the tarballs prepared, we can run the build-server.py
script to build the server package.
mkdir ~/seafile-server-pkgs\n~/dev/seafile/scripts/build-server.py --libsearpc_version=<libsearpc_version> --ccnet_version=<ccnet_version> --seafile_version=<seafile_version> --seahub_version=<seahub_version> --thirdpartdir=/home/pi/dev/seahub_thirdpart --srcdir=/home/pi/seafile-sources --outputdir=/home/pi/seafile-server-pkgs\n
After the script finishes, we will get a seafile-server_6.0.1_pi.tar.gz
in ~/seafile-server-pkgs
folder.
The test should cover these steps at least:
seafile.sh start
and seahub.sh start
, you can log in from a browser. This is the documentation for deploying the Seafile open source development environment in an Ubuntu 22.04 docker container.
"},{"location":"develop/server/#create-persistent-directories","title":"Create persistent directories","text":"Login a linux server as root
user, then:
mkdir -p /root/seafile-ce-docker/source-code\nmkdir -p /root/seafile-ce-docker/conf\nmkdir -p /root/seafile-ce-docker/logs\nmkdir -p /root/seafile-ce-docker/mysql-data\nmkdir -p /root/seafile-ce-docker/seafile-data/library-template\n
"},{"location":"develop/server/#run-a-container","title":"Run a container","text":"After install docker, start a container to deploy seafile open source development environment.
docker run --mount type=bind,source=/root/seafile-ce-docker/source-code,target=/root/dev/source-code \\\n --mount type=bind,source=/root/seafile-ce-docker/conf,target=/root/dev/conf \\\n --mount type=bind,source=/root/seafile-ce-docker/logs,target=/root/dev/logs \\\n --mount type=bind,source=/root/seafile-ce-docker/seafile-data,target=/root/dev/seafile-data \\\n --mount type=bind,source=/root/seafile-ce-docker/mysql-data,target=/var/lib/mysql \\\n -it -p 8000:8000 -p 8082:8082 -p 3000:3000 --name seafile-ce-env ubuntu:22.04 bash\n
Note, the following commands are all executed in the seafile-ce-env docker container.
"},{"location":"develop/server/#update-source-and-install-dependencies","title":"Update Source and Install Dependencies.","text":"Update base system and install base dependencies:
apt-get update && apt-get upgrade -y\n\napt-get install -y ssh libevent-dev libcurl4-openssl-dev libglib2.0-dev uuid-dev intltool libsqlite3-dev libmysqlclient-dev libarchive-dev libtool libjansson-dev valac libfuse-dev python3-dateutil cmake re2c flex sqlite3 python3-pip python3-simplejson git libssl-dev libldap2-dev libonig-dev vim vim-scripts wget cmake gcc autoconf automake mysql-client librados-dev libxml2-dev curl sudo telnet netcat unzip netbase ca-certificates apt-transport-https build-essential libxslt1-dev libffi-dev libpcre3-dev libz-dev xz-utils nginx pkg-config poppler-utils libmemcached-dev sudo ldap-utils libldap2-dev libjwt-dev\n
Install Node 16 from nodesource:
curl -fsSL https://deb.nodesource.com/gpgkey/nodesource-repo.gpg.key | sudo gpg --dearmor -o /etc/apt/keyrings/nodesource.gpg\necho \"deb [signed-by=/etc/apt/keyrings/nodesource.gpg] https://deb.nodesource.com/node_16.x nodistro main\" | sudo tee /etc/apt/sources.list.d/nodesource.list\napt-get install -y nodejs\n
Install other Python 3 dependencies:
apt-get install -y python3 python3-dev python3-pip python3-setuptools python3-ldap\n\npython3 -m pip install --upgrade pip\n\npip3 install Django==4.2.* django-statici18n==2.3.* django_webpack_loader==1.7.* django_picklefield==3.1 django_formtools==2.4 django_simple_captcha==0.6.* djangosaml2==1.5.* djangorestframework==3.14.* python-dateutil==2.8.* pyjwt==2.6.* pycryptodome==3.16.* python-cas==1.6.* pysaml2==7.2.* requests==2.28.* requests_oauthlib==1.3.* future==0.18.* gunicorn==20.1.* mysqlclient==2.1.* qrcode==7.3.* pillow==10.2.* chardet==5.1.* cffi==1.15.1 captcha==0.5.* openpyxl==3.0.* Markdown==3.4.* bleach==5.0.* python-ldap==3.4.* sqlalchemy==2.0.18 redis mock pytest pymysql configparser pylibmc django-pylibmc nose exam splinter pytest-django\n
"},{"location":"develop/server/#install-mariadb-and-create-databases","title":"Install MariaDB and Create Databases","text":"apt-get install -y mariadb-server\nservice mariadb start\nmysqladmin -u root password your_password\n
SQL for creating the databases:
mysql -uroot -pyour_password -e \"CREATE DATABASE ccnet CHARACTER SET utf8;\"\nmysql -uroot -pyour_password -e \"CREATE DATABASE seafile CHARACTER SET utf8;\"\nmysql -uroot -pyour_password -e \"CREATE DATABASE seahub CHARACTER SET utf8;\"\n
"},{"location":"develop/server/#download-source-code","title":"Download Source Code","text":"cd ~/\ncd ~/dev/source-code\n\ngit clone https://github.com/haiwen/libevhtp.git\ngit clone https://github.com/haiwen/libsearpc.git\ngit clone https://github.com/haiwen/seafile-server.git\ngit clone https://github.com/haiwen/seafevents.git\ngit clone https://github.com/haiwen/seafobj.git\ngit clone https://github.com/haiwen/seahub.git\n\ncd libevhtp/\ngit checkout tags/1.1.7 -b tag-1.1.7\n\ncd ../libsearpc/\ngit checkout tags/v3.3-latest -b tag-v3.3-latest\n\ncd ../seafile-server\ngit checkout tags/v11.0.5-server -b tag-v11.0.5-server\n\ncd ../seafevents\ngit checkout tags/v11.0.5-server -b tag-v11.0.5-server\n\ncd ../seafobj\ngit checkout tags/v11.0.5-server -b tag-v11.0.5-server\n\ncd ../seahub\ngit checkout tags/v11.0.5-server -b tag-v11.0.5-server\n
"},{"location":"develop/server/#compile-and-install-seaf-server","title":"Compile and Install seaf-server","text":"cd ../libevhtp\ncmake -DEVHTP_DISABLE_SSL=ON -DEVHTP_BUILD_SHARED=OFF .\nmake\nmake install\nldconfig\n\ncd ../libsearpc\n./autogen.sh\n./configure\nmake\nmake install\nldconfig\n\ncd ../seafile-server\n./autogen.sh\n./configure --disable-fuse\nmake\nmake install\nldconfig\n
"},{"location":"develop/server/#create-conf-files","title":"Create Conf Files","text":"cd ~/dev/conf\n\ncat > ccnet.conf <<EOF\n[Database]\nENGINE = mysql\nHOST = localhost\nPORT = 3306\nUSER = root\nPASSWD = 123456\nDB = ccnet\nCONNECTION_CHARSET = utf8\nCREATE_TABLES = true\nEOF\n\ncat > seafile.conf <<EOF\n[database]\ntype = mysql\nhost = localhost\nport = 3306\nuser = root\npassword = 123456\ndb_name = seafile\nconnection_charset = utf8\ncreate_tables = true\nEOF\n\ncat > seafevents.conf <<EOF\n[DATABASE]\ntype = mysql\nusername = root\npassword = 123456\nname = seahub\nhost = localhost\nEOF\n\ncat > seahub_settings.py <<EOF\nDATABASES = {\n 'default': {\n 'ENGINE': 'django.db.backends.mysql',\n 'NAME': 'seahub',\n 'USER': 'root',\n 'PASSWORD': '123456',\n 'HOST': 'localhost',\n 'PORT': '3306',\n }\n}\nFILE_SERVER_ROOT = 'http://127.0.0.1:8082'\nSERVICE_URL = 'http://127.0.0.1:8000'\nEOF\n
"},{"location":"develop/server/#start-seaf-server","title":"Start seaf-server","text":"seaf-server -F /root/dev/conf -d /root/dev/seafile-data -l /root/dev/logs/seafile.log >> /root/dev/logs/seafile.log 2>&1 &\n
"},{"location":"develop/server/#start-seafevents-and-seahub","title":"Start seafevents and seahub","text":""},{"location":"develop/server/#prepare-environment-variables","title":"Prepare environment variables","text":"export CCNET_CONF_DIR=/root/dev/conf\nexport SEAFILE_CONF_DIR=/root/dev/seafile-data\nexport SEAFILE_CENTRAL_CONF_DIR=/root/dev/conf\nexport SEAHUB_DIR=/root/dev/source-code/seahub\nexport SEAHUB_LOG_DIR=/root/dev/logs\nexport PYTHONPATH=/usr/local/lib/python3.10/dist-packages/:/usr/local/lib/python3.10/site-packages/:/root/dev/source-code/:/root/dev/source-code/seafobj/:/root/dev/source-code/seahub/thirdpart:$PYTHONPATH\n
"},{"location":"develop/server/#start-seafevents","title":"Start seafevents","text":"cd /root/dev/source-code/seafevents/\npython3 main.py --loglevel=debug --logfile=/root/dev/logs/seafevents.log --config-file /root/dev/conf/seafevents.conf >> /root/dev/logs/seafevents.log 2>&1 &\n
"},{"location":"develop/server/#start-seahub","title":"Start seahub","text":""},{"location":"develop/server/#create-seahub-database-tables","title":"Create seahub database tables","text":"cd /root/dev/source-code/seahub/\npython3 manage.py migrate\n
"},{"location":"develop/server/#create-user","title":"Create user","text":"python3 manage.py createsuperuser\n
"},{"location":"develop/server/#start-seahub_1","title":"Start seahub","text":"python3 manage.py runserver 0.0.0.0:8000\n
Then, you can visit http://127.0.0.1:8000/ to use Seafile.
"},{"location":"develop/server/#the-final-directory-structure","title":"The Final Directory Structure","text":""},{"location":"develop/server/#more","title":"More","text":""},{"location":"develop/server/#deploy-frontend-development-environment","title":"Deploy Frontend Development Environment","text":"For deploying frontend development enviroment, you need:
1, checkout seahub to master branch
cd /root/dev/source-code/seahub\n\ngit fetch origin master:master\ngit checkout master\n
2, add the following configuration to /root/dev/conf/seahub_settings.py
import os\nPROJECT_ROOT = '/root/dev/source-code/seahub'\nWEBPACK_LOADER = {\n 'DEFAULT': {\n 'BUNDLE_DIR_NAME': 'frontend/',\n 'STATS_FILE': os.path.join(PROJECT_ROOT,\n 'frontend/webpack-stats.dev.json'),\n }\n}\nDEBUG = True\n
3, install js modules
cd /root/dev/source-code/seahub/frontend\n\nnpm install\n
4, npm run dev
cd /root/dev/source-code/seahub/frontend\n\nnpm run dev\n
5, start seaf-server and seahub
"},{"location":"develop/translation/","title":"Translation","text":""},{"location":"develop/translation/#seahub-seafile-server-71-and-above","title":"Seahub (Seafile Server 7.1 and above)","text":""},{"location":"develop/translation/#translate-and-try-locally","title":"Translate and try locally","text":"1. Locate the translation files in the seafile-server-latest/seahub directory:
/locale/<lang-code>/LC_MESSAGES/django.po
\u00a0 and \u00a0/locale/<lang-code>/LC_MESSAGES/djangojs.po
/media/locales/<lang-code>/seafile-editor.json
For example, if you want to improve the Russian translation, find the corresponding strings to be edited in either of the following three files:
/seafile-server-latest/seahub/locale/ru/LC_MESSAGES/django.po
/seafile-server-latest/seahub/locale/ru/LC_MESSAGES/djangojs.po
/seafile-server-latest/seahub/media/locales/ru/seafile-editor.json
If there is no translation for your language, create a new folder matching your language code and copy-paste the contents of another language folder in your newly created one. (Don't copy from the 'en' folder because the files therein do not contain the strings to be translated.)
2. Edit the files using a UTF-8 editor.
3. Save your changes.
4. (Only necessary when you created a new language code folder) Add a new entry for your language to the language block in the /seafile-server-latest/seahub/seahub/settings.py
file and save it.
LANGUAGES = (\n ...\n ('ru', '\u0420\u0443\u0441\u0441\u043a\u0438\u0439'),\n ...\n)\n
5. (Only necessary when you edited either django.po or djangojs.po) Apply the changes made in django.po and djangojs.po by running the following two commands in /seafile-server-latest/seahub/locale/<lang-code>/LC_MESSAGES
:
msgfmt -o django.mo django.po
msgfmt -o djangojs.mo djangojs.po
Note: msgfmt is included in the gettext package.
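If you maintain translations for several languages, you can compile all of them in a loop instead of running msgfmt by hand; the output name is simply the input name with .po replaced by .mo. A sketch of the name derivation (an assumed helper pattern, not part of the official tooling):

```shell
# Derive the .mo output path from the .po input path, as the two msgfmt
# commands above do. A full loop would be:
#   for po in locale/*/LC_MESSAGES/*.po; do msgfmt -o "${po%.po}.mo" "$po"; done
po="locale/ru/LC_MESSAGES/django.po"
mo="${po%.po}.mo"
echo "$mo"
```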
Additionally, run the following two commands in the seafile-server-latest directory:
./seahub.sh python-env python3 seahub/manage.py compilejsi18n -l <lang-code>
./seahub.sh python-env python3 seahub/manage.py collectstatic --noinput -i admin -i termsandconditions --no-post-process
6. Restart Seahub to load changes made in django.po and djangojs.po; reload the Markdown editor to check your modifications in the seafile-editor.json file.
"},{"location":"develop/translation/#submit-your-translation","title":"Submit your translation","text":"Please submit translations via Transifex: https://www.transifex.com/projects/p/seahub/
Steps:
FileNotFoundError
occurred when executing the command manage.py collectstatic
.
FileNotFoundError: [Errno 2] No such file or directory: '/opt/seafile/seafile-server-latest/seahub/frontend/build'\n
Steps:
Modify STATICFILES_DIRS
in /opt/seafile/seafile-server-latest/seahub/seahub/settings.py
manually
STATICFILES_DIRS = (\n # Put strings here, like \"/home/html/static\" or \"C:/www/django/static\".\n # Always use forward slashes, even on Windows.\n # Don't forget to use absolute paths, not relative paths.\n '%s/static' % PROJECT_ROOT,\n# '%s/frontend/build' % PROJECT_ROOT,\n)\n
Execute the command
./seahub.sh python-env python3 seahub/manage.py collectstatic --noinput -i admin -i termsandconditions --no-post-process\n
Restore STATICFILES_DIRS
manually
STATICFILES_DIRS = (\n # Put strings here, like \"/home/html/static\" or \"C:/www/django/static\".\n # Always use forward slashes, even on Windows.\n # Don't forget to use absolute paths, not relative paths.\n '%s/static' % PROJECT_ROOT,\n '%s/frontend/build' % PROJECT_ROOT,\n)\n
Restart Seahub
./seahub.sh restart\n
This issue has been fixed since version 11.0.
"},{"location":"develop/web_api_v2.1/","title":"Web API","text":""},{"location":"develop/web_api_v2.1/#seafile-web-api","title":"Seafile Web API","text":"The API document can be accessed in the following location:
The Admin API document can be accessed in the following location:
The following setups are required for building and packaging Sync Client on Windows:
vcpkg
# Example of the install command:\n$ ./vcpkg.exe install curl[core,openssl]:x64-windows\n
Python 3.7
Certificates
Note: certificates for Windows applications are issued by a third-party certificate authority.
Support for Breakpad can be added with the following steps:
install gyp tool
$ git clone --depth=1 git@github.com:chromium/gyp.git\n$ python setup.py install\n
compile breakpad
$ git clone --depth=1 git@github.com:google/breakpad.git\n$ cd breakpad\n$ git clone https://github.com/google/googletest.git testing\n$ cd ..\n# create vs solution, this may throw an error \"module collections.abc has no attribute OrderedDict\", you should open the msvs.py and replace 'collections.abc' with 'collections'.\n$ gyp --no-circular-check breakpad\\src\\client\\windows\\breakpad_client.gyp\n
compile dump_syms tool
create vs solution
gyp --no-circular-check breakpad\\src\\tools\\windows\\tools_windows.gyp\n
copy VC merge modules
copy C:\\Program Files (x86)\\Microsoft Visual Studio\\2019\\Community\\VC\\Redist\\MSVC\\v142\\MergeModules\\MergeModules\\Microsoft_VC142_CRT_x64.msm C:\\packagelib\n
Following directory structures are expected when building Sync Client:
seafile-workspace/\nseafile-workspace/libsearpc/\nseafile-workspace/seafile/\nseafile-workspace/seafile-client/\nseafile-workspace/seafile-shell-ext/\n
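The layout above can be created in one command before cloning each repository into its folder; a sketch (using /tmp here purely as a stand-in for your actual workspace root):

```shell
# Create the expected workspace skeleton; each repository is then cloned
# into its corresponding subfolder.
base=/tmp/seafile-workspace
mkdir -p "$base"/libsearpc "$base"/seafile "$base"/seafile-client "$base"/seafile-shell-ext
ls "$base"
```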
The source code of these projects can be downloaded at github.com/haiwen/libsearpc, github.com/haiwen/seafile, github.com/haiwen/seafile-client, and github.com/haiwen/seafile-shell-ext.
"},{"location":"develop/windows/#building","title":"Building","text":"Note: the building commands have been included in the packaging script, you can skip building commands while packaging.
To build libsearpc:
$ cd seafile-workspace/libsearpc/\n$ devenv libsearpc.sln /build \"Release|x64\"\n
To build seafile:
$ cd seafile-workspace/seafile/\n$ devenv seafile.sln /build \"Release|x64\"\n$ devenv msi/custom/seafile_custom.sln /build \"Release|x64\"\n
To build seafile-client:
$ cd seafile-workspace/seafile-client/\n$ devenv third_party/quazip/quazip.sln /build \"Release|x64\"\n$ devenv seafile-client.sln /build \"Release|x64\"\n
To build seafile-shell-ext:
$ cd seafile-workspace/seafile-shell-ext/\n$ devenv extensions/seafile_ext.sln /build \"Release|x64\"\n$ devenv seadrive-thumbnail-ext/seadrive_thumbnail_ext.sln /build \"Release|x64\"\n
"},{"location":"develop/windows/#packaging","title":"Packaging","text":"$ cd seafile-workspace/seafile-client/third_party/quazip\n$ devenv quazip.sln /build Release|x64\n$ cd seafile-workspace/seafile/scripts/build\n$ python build-msi-vs.py 1.0.0\n
If you use a cluster to deploy Seafile, you can use distributed indexing to achieve real-time indexing and improve indexing efficiency. The indexing process is as follows:
"},{"location":"extension/distributed_indexing/#install-redis-and-modify-configuration-files","title":"Install redis and modify configuration files","text":""},{"location":"extension/distributed_indexing/#1-install-redis-on-all-frontend-nodes","title":"1. Install redis on all frontend nodes","text":"Tip
If you use a Redis cloud service, skip this step and modify the configuration files directly
UbuntuCentOS$ apt install redis-server\n
$ yum install redis\n
"},{"location":"extension/distributed_indexing/#2-install-python-redis-third-party-package-on-all-frontend-nodes","title":"2. Install python redis third-party package on all frontend nodes","text":"$ pip install redis\n
"},{"location":"extension/distributed_indexing/#3-modify-the-seafeventsconf-on-all-frontend-nodes","title":"3. Modify the seafevents.conf
on all frontend nodes","text":"Add the following config items
[EVENTS PUBLISH]\nmq_type=redis # must be redis\nenabled=true\n\n[REDIS]\nserver=127.0.0.1 # your redis server host\nport=6379 # your redis server port\npassword=xxx # your redis server password, if not password, do not set this item\n
"},{"location":"extension/distributed_indexing/#4-modify-the-seafeventsconf-on-the-backend-node","title":"4. Modify the seafevents.conf
on the backend node","text":"Disable the scheduled indexing task, because the scheduled indexing task and the distributed indexing task conflict.
[INDEX FILES]\nenabled=true\n |\n V\nenabled=false \n
"},{"location":"extension/distributed_indexing/#5-restart-seafile","title":"5. Restart Seafile","text":"Deploy in DockerDeploy from binary packages docker exec -it seafile bash\ncd /scripts\n./seafile.sh restart && ./seahub.sh restart\n
cd /opt/seafile/seafile-server-latest\n./seafile.sh restart && ./seahub.sh restart\n
"},{"location":"extension/distributed_indexing/#deploy-distributed-indexing","title":"Deploy distributed indexing","text":"First, prepare a seafes master node and several seafes slave nodes, the number of slave nodes depends on your needs. Deploy Seafile on these nodes, and copy the configuration files in the conf
directory from the frontend nodes. The master node and slave nodes do not need to start Seafile, but need to read the configuration files to obtain the necessary information.
Next, create a configuration file index-master.conf
in the conf
directory of the master node, e.g.
[DEFAULT]\nmq_type=redis # must be redis\n\n[REDIS]\nserver=127.0.0.1 # your redis server host\nport=6379 # your redis server port\npassword=xxx # your redis server password, if not password, do not set this item\n
Execute ./run_index_master.sh [start/stop/restart]
in the seafile-server-latest
directory (or /scripts
inside the Seafile-docker container) to start, stop, or restart the program.
Next, create a configuration file index-slave.conf
in the conf
directory of all slave nodes, e.g.
[DEFAULT]\nmq_type=redis # must be redis\nindex_workers=2 # number of threads to create/update indexes, you can increase this value according to your needs\n\n[REDIS]\nserver=127.0.0.1 # your redis server host\nport=6379 # your redis server port\npassword=xxx # your redis server password, if not password, do not set this item\n
Execute ./run_index_worker.sh [start/stop/restart]
in the seafile-server-latest
directory (or /scripts
inside the Seafile-docker container) to start, stop, or restart the program.
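How large to set index_workers on each slave node depends on CPU and storage throughput. As a rough, unofficial starting point (an assumption, not a documented rule), you could size it from the node's core count and then tune against your actual indexing load:

```shell
# Derive a starting index_workers value from the node's CPU count:
# half the cores, but at least 1 worker. This heuristic is an assumption.
cpus=$(getconf _NPROCESSORS_ONLN)
workers=$(( cpus / 2 ))
if [ "$workers" -lt 1 ]; then workers=1; fi
echo "index_workers=$workers"
```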
Note
The index worker connects to backend storage directly. You don't need to run seaf-server in index worker node.
"},{"location":"extension/distributed_indexing/#some-commands-in-distributed-indexing","title":"Some commands in distributed indexing","text":"Rebuild search index, execute in the seafile-server-last
directory (or /scripts
inside the Seafile-docker container):
$ ./pro/pro.py search --clear\n$ ./run_index_master.sh python-env index_op.py --mode restore_all_repo\n
List the number of indexing tasks currently remaining, execute in the seafile-server-latest
directory (or /scripts
inside the Seafile-docker container):
$ ./run_index_master.sh python-env index_op.py --mode show_all_task\n
The above commands need to be run on the master node.
"},{"location":"extension/fuse/","title":"FUSE extension","text":"Files in the seafile system are split to blocks, which means what are stored on your seafile server are not complete files, but blocks. This design faciliates effective data deduplication.
However, administrators sometimes want to access the files directly on the server. You can use seaf-fuse to do this.
Seaf-fuse
is an implementation of the FUSE virtual filesystem. In a word, it mounts all the seafile files to a folder (which is called the '''mount point'''), so that you can access all the files managed by seafile server, just as you access a normal folder on your server.
Note
Assume we want to mount to /opt/seafile-fuse
on the host.
Add the following content
seafile:\n ...\n volumes:\n ...\n - /opt/seafile-fuse:/seafile-fuse\n privileged: true\n cap_add:\n - SYS_ADMIN\n
"},{"location":"extension/fuse/#start-seaf-fuse-with-the-script-in-docker","title":"Start seaf-fuse with the script in docker","text":"Start Seafile server and enter the container
docker compose up -d\n\ndocker exec -it seafile bash\n
Start seaf-fuse in the container
cd /opt/seafile/seafile-server-latest/\n\n./seaf-fuse.sh start /seafile-fuse\n
"},{"location":"extension/fuse/#use-seaf-fuse-in-binary-based-deployment","title":"Use seaf-fuse in binary based deployment","text":"Assume we want to mount to /data/seafile-fuse
.
mkdir -p /data/seafile-fuse\n
"},{"location":"extension/fuse/#start-seaf-fuse-with-the-script","title":"Start seaf-fuse with the script","text":"Before start seaf-fuse, you should have started seafile server with ./seafile.sh start
./seaf-fuse.sh start /data/seafile-fuse\n
seaf-fuse supports standard mount options for FUSE. For example, you can specify ownership for the mounted folder:
./seaf-fuse.sh start -o uid=<uid> /data/seafile-fuse\n
seaf-fuse enables the block cache function by default to cache block objects, thereby reducing access to backend storage; however, this function occupies local disk space. Since Seafile-pro-10.0.0, you can disable the block cache by adding the following option:
./seaf-fuse.sh start --disable-block-cache /data/seafile-fuse\n
You can find the complete list of supported options in man fuse
.
./seaf-fuse.sh stop\n
"},{"location":"extension/fuse/#contents-of-the-mounted-folder","title":"Contents of the mounted folder","text":""},{"location":"extension/fuse/#the-top-level-folder","title":"The top level folder","text":"Now you can list the content of /data/seafile-fuse
.
$ ls -lhp /data/seafile-fuse\n\ndrwxr-xr-x 2 root root 4.0K Jan 1 2015 abc@abc.com/\ndrwxr-xr-x 2 root root 4.0K Jan 4 2015 foo@foo.com/\ndrwxr-xr-x 2 root root 4.0K Jan 1 2015 plus@plus.com/\ndrwxr-xr-x 2 root root 4.0K Jan 1 2015 sharp@sharp.com/\ndrwxr-xr-x 2 root root 4.0K Jan 3 2015 test@test.com/\n
$ ls -lhp /data/seafile-fuse/abc@abc.com\n\ndrwxr-xr-x 2 root root 924 Jan 1 1970 5403ac56-5552-4e31-a4f1-1de4eb889a5f_Photos/\ndrwxr-xr-x 2 root root 1.6K Jan 1 1970 a09ab9fc-7bd0-49f1-929d-6abeb8491397_My Notes/\n
From the above list you can see that under each user's folder there are subfolders, each of which represents a library of that user and has a name of this format: '''{library_id}_{library_name}'''.
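For scripting against the mount point, the library id and name can be recovered from such a folder name with plain shell parameter expansion. A sketch, using the example name from the listing above:

```shell
# Split a mounted library folder name of the form {library_id}_{library_name}
# into its two parts. The library id is a UUID and contains no underscore,
# so splitting at the first underscore is safe.
dir="5403ac56-5552-4e31-a4f1-1de4eb889a5f_Photos"
lib_id="${dir%%_*}"     # everything before the first underscore
lib_name="${dir#*_}"    # everything after the first underscore
echo "$lib_id -> $lib_name"
```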
"},{"location":"extension/fuse/#the-folder-for-a-library","title":"The folder for a library","text":"$ ls -lhp /data/seafile-fuse/abc@abc.com/5403ac56-5552-4e31-a4f1-1de4eb889a5f_Photos/\n\n-rw-r--r-- 1 root root 501K Jan 1 2015 image.png\n-rw-r--r-- 1 root root 501K Jan 1 2015 sample.jpng\n
"},{"location":"extension/fuse/#if-you-get-a-permission-denied-error","title":"If you get a \"Permission denied\" error","text":"If you get an error message saying \"Permission denied\" when running ./seaf-fuse.sh start
, most likely you are not in the \"fuse group\". You should:
Add yourself to the fuse group
sudo usermod -a -G fuse <your-user-name>\n
Logout your shell and login again
./seaf-fuse.sh start <path>
again.Deployment Tips
The steps from this guide only cover installing collabora as another container on the same docker host that your seafile docker container is on. Please make sure your host have sufficient cores and RAM.
If you want to install on another host please refer the collabora documentation for instructions. Then you should follow here to configure seahub_settings.py
to enable online office.
Note
To integrate LibreOffice with Seafile, you have to enable HTTPS in your Seafile server:
Deploy in DockerDeploy from binary packagesModify .env
file:
SEAFILE_SERVER_PROTOCOL=https\n
Please follow the links to enable https:
Download the collabora.yml
wget https://manual.seafile.com/12.0/docker/collabora.yml\n
Insert collabora.yml
to field COMPOSE_FILE
lists (i.e., COMPOSE_FILE='...,collabora.yml'
) and add the relative options in .env
COLLABORA_IMAGE=collabora/code:24.04.5.1.1 # image of LibreOffice\nCOLLABORA_PORT=6232 # expose port\nCOLLABORA_USERNAME=<your LibreOffice admin username>\nCOLLABORA_PASSWORD=<your LibreOffice admin password>\nCOLLABORA_ENABLE_ADMIN_CONSOLE=true # enable admin console or not\nCOLLABORA_REMOTE_FONT= # remote font url\nCOLLABORA_ENABLE_FILE_LOGGING=false # use file logs or not, see FQA\n
"},{"location":"extension/libreoffice_online/#config-seafile","title":"Config Seafile","text":"Add following config option to seahub_settings.py:
OFFICE_SERVER_TYPE = 'CollaboraOffice'\nENABLE_OFFICE_WEB_APP = True\nOFFICE_WEB_APP_BASE_URL = 'https://seafile.example.com:6232/hosting/discovery'\n\n# Expiration of WOPI access token\n# WOPI access token is a string used by Seafile to determine the file's\n# identity and permissions when use LibreOffice Online view it online\n# And for security reason, this token should expire after a set time period\nWOPI_ACCESS_TOKEN_EXPIRATION = 30 * 60 # seconds\n\n# List of file formats that you want to view through LibreOffice Online\n# You can change this value according to your preferences\n# And of course you should make sure your LibreOffice Online supports to preview\n# the files with the specified extensions\nOFFICE_WEB_APP_FILE_EXTENSION = ('odp', 'ods', 'odt', 'xls', 'xlsb', 'xlsm', 'xlsx','ppsx', 'ppt', 'pptm', 'pptx', 'doc', 'docm', 'docx')\n\n# Enable edit files through LibreOffice Online\nENABLE_OFFICE_WEB_APP_EDIT = True\n\n# types of files should be editable through LibreOffice Online\nOFFICE_WEB_APP_EDIT_FILE_EXTENSION = ('odp', 'ods', 'odt', 'xls', 'xlsb', 'xlsm', 'xlsx','ppsx', 'ppt', 'pptm', 'pptx', 'doc', 'docm', 'docx')\n
Then restart Seafile.
Click an office file in Seafile web interface, you will see the online preview rendered by CollaboraOnline. Here is an example:
"},{"location":"extension/libreoffice_online/#trouble-shooting","title":"Trouble shooting","text":"Understanding how the integration work will help you debug the problem. When a user visits a file page:
CollaboraOnline container will output the logs in the stdout, you can use following command to access it
docker logs seafile-collabora\n
If you would like to use a file to save logs (i.e., a .log
file), you can modify .env
with the following statement, and uncomment the relevant lines in collabora.yml
# .env\nCOLLABORA_ENABLE_FILE_LOGGING=True\nCOLLABORA_PATH=/opt/collabora # path of the collabora logs\n
# collabora.yml\n# remove the following notes\n...\nservices:\n collabora:\n ...\n volumes:\n - \"${COLLABORA_PATH:-/opt/collabora}/logs:/opt/cool/logs/\" # chmod 777 needed\n ...\n...\n
Create the logs directory, and restart Seafile server
mkdir -p /opt/collabora\nchmod 777 /opt/collabora\ndocker compose down\ndocker compose up -d\n
"},{"location":"extension/libreoffice_online/#collaboraonline-server-on-a-separate-host","title":"CollaboraOnline server on a separate host","text":"If your CollaboraOnline server on a separate host, you just need to modify the seahub_settings.py
similar to deploy on the same host. The only different is you have to change the field OFFICE_WEB_APP_BASE_URL
to your CollaboraOnline host (e.g., https://collabora-online.seafile.com/hosting/discovery
).
Currently, the status updates of files and libraries on the client and web interface are based on polling the server. The latest status cannot be reflected in real time on the client due to polling delays. The client needs to periodically refresh the library modification, file locking, subdirectory permissions and other information, which causes additional performance overhead on the server.
When a directory is opened on the web interface, the lock status of the file cannot be updated in real time, and the page needs to be refreshed.
The notification server uses the WebSocket protocol and maintains a two-way communication connection with the client or the web interface. When the above changes occur, seaf-server notifies the notification server of the changes. The notification server can then notify the client or the web interface in real time. This not only improves real-time behavior, but also reduces the performance overhead on the server.
"},{"location":"extension/notification-server/#supported-update-reminder-types","title":"Supported update reminder types","text":"Since Seafile 12.0, we use a separate Docker image to deploy the notification server. First download notification-server.yml
to the Seafile directory:
wget https://manual.seafile.com/12.0/docker/notification-server.yml\n
Modify .env
, and insert notification-server.yml
into COMPOSE_FILE
:
COMPOSE_FILE='seafile-server.yml,caddy.yml,notification-server.yml'\n
You also need to add the following configuration in seafile.conf:
[notification]\nenabled = true\n# the ip of notification server. (default is `notification-server` in Docker)\nhost = notification-server\n# the port of notification server\nport = 8083\n
You can run the notification server with the following command:
docker compose down\ndocker compose up -d\n
"},{"location":"extension/notification-server/#checking-notification-server-status","title":"Checking notification server status","text":"When the notification server is working, you can access http://127.0.0.1:8083/ping
from your browser, which will answer {\"ret\": \"pong\"}
. If you have a proxy configured, you can access https://{server}/notification/ping
from your browser instead.
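For unattended monitoring, the same ping check can be scripted. A minimal sketch (the URL is an assumption to adjust to your deployment; the canned response lets the parsing logic be tried offline, and the commented curl call shows the live check):

```shell
# Health-check sketch for the notification server's ping endpoint.
NOTIF_URL="http://127.0.0.1:8083/ping"   # assumption: adjust to your server
resp='{"ret": "pong"}'                   # live check: resp=$(curl -s "$NOTIF_URL")
if printf '%s' "$resp" | grep -q '"ret": *"pong"'; then
    echo "notification server OK"
else
    echo "notification server unreachable"
fi
```

Such a script can be dropped into cron or a monitoring agent; any answer other than {"ret": "pong"} should be treated as a failure.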
If the client works with the notification server, there should be a log message in seafile.log or seadrive.log:
Notification server is enabled on the remote server xxxx\n
"},{"location":"extension/notification-server/#notification-server-in-seafile-cluster","title":"Notification Server in Seafile cluster","text":"There is no additional features for notification server in the Pro Edition. It works the same as in community edition.
If you enable clustering, you need to deploy the notification server on one of the servers or on a separate server. The load balancer should forward WebSocket requests to this node.
Download .env
and notification-server.yml
to notification server directory:
wget https://manual.seafile.com/12.0/docker/notification-server/standalone/notification-server.yml\nwget -O .env https://manual.seafile.com/12.0/docker/notification-server/standalone/env\n
Then modify the .env
file according to your environment. The following fields need to be modified:
- NOTIFICATION_SERVER_VOLUME: the volume directory of notification server data
- SEAFILE_MYSQL_DB_HOST: Seafile MySQL host
- SEAFILE_MYSQL_DB_USER: Seafile MySQL user, default is seafile
- SEAFILE_MYSQL_DB_PASSWORD: Seafile MySQL password
- TIME_ZONE: time zone
- JWT_PRIVATE_KEY: JWT key, the same as the config in the Seafile .env file
- SEAFILE_SERVER_HOSTNAME: Seafile host name
- SEAFILE_SERVER_PROTOCOL: http or https

You can run the notification server with the following command:
docker compose up -d\n
You also need to add the following configuration to seafile.conf and restart the Seafile server:
[notification]\nenabled = true\n# the ip of notification server.\nhost = 192.168.0.83\n# the port of notification server\nport = 8083\n
You need to configure the load balancer according to the following forwarding rules:

- Forward /notification/ping requests to the notification server via the HTTP protocol.
- Forward /notification (WebSocket) requests to the notification server.

Here is a configuration that uses HAProxy to support the notification server. The HAProxy version needs to be >= 2.0. You should use similar configurations for other load balancers.
#/etc/haproxy/haproxy.cfg\n\n# Other existing haproxy configurations\n......\n\nfrontend seafile\n bind 0.0.0.0:80\n mode http\n option httplog\n option dontlognull\n option forwardfor\n acl notif_ping_request url_sub -i /notification/ping\n acl ws_requests url -i /notification\n acl hdr_connection_upgrade hdr(Connection) -i upgrade\n acl hdr_upgrade_websocket hdr(Upgrade) -i websocket\n use_backend ws_backend if hdr_connection_upgrade hdr_upgrade_websocket\n use_backend notif_ping_backend if notif_ping_request\n use_backend ws_backend if ws_requests\n default_backend backup_nodes\n\nbackend backup_nodes\n cookie SERVERID insert indirect nocache\n server seafileserver01 192.168.0.137:80\n\nbackend notif_ping_backend\n option forwardfor\n server ws 192.168.0.137:8083\n\nbackend ws_backend\n option forwardfor # This sets X-Forwarded-For\n server ws 192.168.0.137:8083\n
"},{"location":"extension/office_web_app/","title":"Office Online Server","text":"In Seafile Professional Server Version 4.4.0 (or above), you can use Microsoft Office Online Server (formerly named Office Web Apps) to preview documents online. Office Online Server provides the best preview for all Office format files. It also support collaborative editing of Office files directly in the web browser. For organizations with Microsoft Office Volume License, it's free to use Office Online Server. For more information about Office Online Server and how to deploy it, please refer to https://technet.microsoft.com/en-us/library/jj219455(v=office.16).aspx.
Seafile only supports Office Online Server 2016 and above
To use Office Online Server for preview, please add the following config options to seahub_settings.py.
# Enable Office Online Server\nENABLE_OFFICE_WEB_APP = True\n\n# Url of Office Online Server's discovery page\n# The discovery page tells Seafile how to interact with Office Online Server when view file online\n# You should change `http://example.office-web-app.com` to your actual Office Online Server server address\nOFFICE_WEB_APP_BASE_URL = 'http://example.office-web-app.com/hosting/discovery'\n\n# Expiration of WOPI access token\n# WOPI access token is a string used by Seafile to determine the file's\n# identity and permissions when use Office Online Server view it online\n# And for security reason, this token should expire after a set time period\nWOPI_ACCESS_TOKEN_EXPIRATION = 60 * 60 * 24 # seconds\n\n# List of file formats that you want to view through Office Online Server\n# You can change this value according to your preferences\n# And of course you should make sure your Office Online Server supports to preview\n# the files with the specified extensions\nOFFICE_WEB_APP_FILE_EXTENSION = ('ods', 'xls', 'xlsb', 'xlsm', 'xlsx','ppsx', 'ppt',\n 'pptm', 'pptx', 'doc', 'docm', 'docx')\n\n# Enable edit files through Office Online Server\nENABLE_OFFICE_WEB_APP_EDIT = True\n\n# types of files should be editable through Office Online Server\n# Note, Office Online Server 2016 is needed for editing docx\nOFFICE_WEB_APP_EDIT_FILE_EXTENSION = ('xlsx', 'pptx', 'docx')\n\n\n# HTTPS authentication related (optional)\n\n# Server certificates\n# Path to a CA_BUNDLE file or directory with certificates of trusted CAs\n# NOTE: If set this setting to a directory, the directory must have been processed using the c_rehash utility supplied with OpenSSL.\nOFFICE_WEB_APP_SERVER_CA = '/path/to/certfile'\n\n\n# Client certificates\n# You can specify a single file (containing the private key and the certificate) to use as client side certificate\nOFFICE_WEB_APP_CLIENT_PEM = 'path/to/client.pem'\n\n# or you can specify these two file paths to use as client side certificate\nOFFICE_WEB_APP_CLIENT_CERT = 'path/to/client.cert'\nOFFICE_WEB_APP_CLIENT_KEY = 'path/to/client.key'\n
Then restart Seafile:
./seafile.sh restart\n./seahub.sh restart\n
After you click the document you specified in seahub_settings.py, you will see the new preview page.
"},{"location":"extension/office_web_app/#trouble-shooting","title":"Trouble shooting","text":"Understanding how the web app integration works is going to help you debugging the problem. When a user visits a file page:
Please check the Nginx log for Seahub (for step 3) and Office Online Server to see which step is wrong.
Warning
You should make sure you have configured at least a few GB of paging files in your Windows system. Otherwise the IIS worker processes may die randomly when handling Office Online requests.
"},{"location":"extension/only_office/","title":"OnlyOffice","text":"Seafile supports OnlyOffice to view/edit office files online. In order to use OnlyOffice, you must first deploy an OnlyOffice server.
Deployment Tips
You can deploy OnlyOffice on the same machine as Seafile (only supported when deploying with Docker, with sufficient cores and RAM) using the onlyoffice.yml provided by Seafile according to this document, or you can deploy it on a different machine according to the OnlyOffice official document.
Download the onlyoffice.yml
wget https://manual.seafile.com/12.0/docker/onlyoffice.yml\n
Insert onlyoffice.yml
into COMPOSE_FILE
list (i.e., COMPOSE_FILE='...,onlyoffice.yml'
), and add the following OnlyOffice configurations to the .env
file.
# OnlyOffice image\nONLYOFFICE_IMAGE=onlyoffice/documentserver:8.1.0.1\n\n# Persistent storage directory of OnlyOffice\nONLYOFFICE_VOLUME=/opt/onlyoffice\n\n# OnlyOffice document server port\nONLYOFFICE_PORT=6233\n\n# jwt secret, generated by `pwgen -s 40 1` \nONLYOFFICE_JWT_SECRET=<your jwt secret>\n
Note
From Seafile 12.0, OnlyOffice's JWT verification is forcibly enabled. Secure communication between Seafile and OnlyOffice is granted by a shared secret. You can generate the JWT secret with the following command:
pwgen -s 40 1\n
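If pwgen is not available on your system, openssl (installed by default on most distributions) can generate a secret of comparable strength. This is an alternative sketch, not the documented command:

```shell
# 20 random bytes rendered as 40 hex characters, suitable for ONLYOFFICE_JWT_SECRET
SECRET=$(openssl rand -hex 20)
echo "$SECRET"
```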
Also modify seahub_settings.py
ENABLE_ONLYOFFICE = True\nONLYOFFICE_APIJS_URL = 'https://seafile.example.com:6233/web-apps/apps/api/documents/api.js'\nONLYOFFICE_FILE_EXTENSION = ('doc', 'docx', 'ppt', 'pptx', 'xls', 'xlsx', 'odt', 'fodt', 'odp', 'fodp', 'ods', 'fods', 'csv', 'ppsx', 'pps')\nONLYOFFICE_JWT_SECRET = '<your jwt secret>'\n
Tip
By default OnlyOffice uses port 6233 for communication between Seafile and the Document Server. You can modify the bound port by specifying ONLYOFFICE_PORT; the port in ONLYOFFICE_APIJS_URL in seahub_settings.py must be modified accordingly.
The following configuration options are only for OnlyOffice experts. You can create and mount a custom configuration file called local-production-linux.json
to force some settings.
nano local-production-linux.json\n
For example, you can configure OnlyOffice to save automatically by copying the following code block into this file:
{\n \"services\": {\n \"CoAuthoring\": {\n \"autoAssembly\": {\n \"enable\": true,\n \"interval\": \"5m\"\n }\n }\n },\n \"FileConverter\": {\n \"converter\": {\n \"downloadAttemptMaxCount\": 3\n }\n }\n}\n
Mount this config file into your onlyoffice block in onlyoffice.yml
:
service:\n ...\n onlyoffice:\n ...\n volumes:\n ...\n - <Your path to local-production-linux.json>:/etc/onlyoffice/documentserver/local-production-linux.json\n...\n
For more information you can check the official documentation: https://api.onlyoffice.com/editors/signature/ and https://github.com/ONLYOFFICE/Docker-DocumentServer#available-configuration-parameters
"},{"location":"extension/only_office/#restart-seafile-docker-instance-and-test-that-onlyoffice-is-running","title":"Restart Seafile-docker instance and test that OnlyOffice is running","text":"docker-compose down\ndocker-compose up -d\n
Success
After the installation process is finished, visit this page to make sure you have deployed OnlyOffice successfully: http{s}://{your Seafile server's domain or IP}:6233/welcome. You will see the "Document Server is running" message on this page.
First, run docker logs -f seafile-onlyoffice
, then open an office file. After the \"Download failed.\" error appears on the page, observe the logs for the following error:
==> /var/log/onlyoffice/documentserver/converter/out.log <==\n...\nError: DNS lookup {local IP} (family:undefined, host:undefined) is not allowed. Because, It is a private IP address.\n...\n
If it shows this error message and you haven't enabled JWT while using a local network, it is likely an error triggered proactively by the OnlyOffice server for enhanced security. (https://github.com/ONLYOFFICE/DocumentServer/issues/2268#issuecomment-1600787905)
So, as mentioned in the post, we highly recommend enabling JWT in your integration to fix this problem.
"},{"location":"extension/only_office/#the-document-security-token-is-not-correctly-formed","title":"The document security token is not correctly formed","text":"Starting from OnlyOffice Docker-DocumentServer version 7.2, JWT is enabled by default on OnlyOffice server.
So, for security reasons, please Configure OnlyOffice to use JWT Secret.
"},{"location":"extension/only_office/#onlyoffice-on-a-separate-host-and-url","title":"OnlyOffice on a separate host and URL","text":"In general, you only need to specify the values \u200b\u200bof the following fields in seahub_settings.py
and then restart the service.
ENABLE_ONLYOFFICE = True\nONLYOFFICE_APIJS_URL = 'http{s}://<Your OnlyOffice host url>/web-apps/apps/api/documents/api.js'\nONLYOFFICE_FILE_EXTENSION = ('doc', 'docx', 'ppt', 'pptx', 'xls', 'xlsx', 'odt', 'fodt', 'odp', 'fodp', 'ods', 'fods', 'csv', 'ppsx', 'pps')\nONLYOFFICE_JWT_SECRET = '<your jwt secret>'\n
"},{"location":"extension/only_office/#about-ssl","title":"About SSL","text":"For deployments using the onlyoffice.yml
file in this document, SSL is primarily handled by Caddy
. If the OnlyOffice document server and Seafile server are not on the same machine, please refer to the official document to configure SSL for OnlyOffice.
SeaDoc is an extension of Seafile that provides an online collaborative document editor.
SeaDoc is designed around the following key ideas:
SeaDoc excels at:
The SeaDoc architecture is demonstrated below:
Here is the workflow when a user opens an sdoc file in the browser:
The easiest way to deploy SeaDoc is to run it with Seafile server on the same host using the same Docker network. If you need to deploy SeaDoc standalone, you can follow the next section.
Download the seadoc.yml
to /opt/seafile
wget https://manual.seafile.com/12.0/docker/seadoc.yml\n
Modify .env
, and insert seadoc.yml
into COMPOSE_FILE
, and enable SeaDoc server
COMPOSE_FILE='seafile-server.yml,caddy.yml,seadoc.yml'\n\nENABLE_SEADOC=true\nSEADOC_SERVER_URL=https://seafile.example.com/sdoc-server\n
Start the SeaDoc server with the following command:
docker compose up -d\n
Now you can use SeaDoc!
"},{"location":"extension/setup_seadoc/#deploy-seadoc-standalone","title":"Deploy SeaDoc standalone","text":"If you deploy Seafile in a cluster or if you deploy Seafile with binary package, you need to setup SeaDoc as a standalone service. Here are the steps:
Download the .env
and seadoc.yml
files into the directory /opt/seadoc:
wget https://manual.seafile.com/12.0/docker/seadoc/1.0/standalone/seadoc.yml\nwget -O .env https://manual.seafile.com/12.0/docker/seadoc/1.0/standalone/env\n
Then modify the .env
file according to your environment. The following fields need to be modified:
- SEADOC_VOLUME: the volume directory of SeaDoc data
- SEAFILE_MYSQL_DB_HOST: Seafile MySQL host
- SEAFILE_MYSQL_DB_USER: Seafile MySQL user, default is seafile
- SEAFILE_MYSQL_DB_PASSWORD: Seafile MySQL password
- TIME_ZONE: time zone
- JWT_PRIVATE_KEY: JWT key, the same as the config in the Seafile .env file
- SEAFILE_SERVER_HOSTNAME: Seafile host name
- SEAFILE_SERVER_PROTOCOL: http or https

(Optional) By default, the SeaDoc server binds to port 80 on the host machine. If the port is already taken by another service, you have to change the listening port of SeaDoc:
Modify seadoc.yml
services:\n seadoc:\n ...\n ports:\n - \"<your SeaDoc server port>:80\"\n...\n
Add a reverse proxy for the SeaDoc server. In a cluster environment, this means you need to add reverse proxy rules at the load balancer. Here, we use Nginx as an example (please replace 127.0.0.1:80 with the host:port of your SeaDoc server):
...\nserver {\n ...\n\n location /sdoc-server/ {\n proxy_pass http://127.0.0.1:80/;\n proxy_redirect off;\n proxy_set_header Host $host;\n proxy_set_header X-Real-IP $remote_addr;\n proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;\n proxy_set_header X-Forwarded-Host $server_name;\n proxy_set_header X-Forwarded-Proto $scheme;\n\n client_max_body_size 100m;\n }\n\n location /socket.io {\n proxy_pass http://127.0.0.1:80;\n proxy_http_version 1.1;\n proxy_set_header Upgrade $http_upgrade;\n proxy_set_header Connection 'upgrade';\n proxy_redirect off;\n\n proxy_buffers 8 32k;\n proxy_buffer_size 64k;\n\n proxy_set_header X-Real-IP $remote_addr;\n proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;\n proxy_set_header Host $http_host;\n proxy_set_header X-NginX-Proxy true;\n }\n}\n
Start the SeaDoc server with the following command:
docker compose up -d\n
Modify Seafile server's configuration and start SeaDoc server
Warning
After using a reverse proxy, your SeaDoc service will be located at the /sdoc-server
path of your reverse proxy (i.e. xxx.example.com/sdoc-server
). For example:
Then SEADOC_SERVER_URL
will be
http{s}://xxx.example.com/sdoc-server\n
Modify .env
in your Seafile-server host:
ENABLE_SEADOC=true\nSEADOC_SERVER_URL=https://seafile.example.com/sdoc-server\n
Restart Seafile server
Deploy in Docker (including cluster mode):
docker compose down\ndocker compose up -d\n
Deploy from binary packages:
cd /opt/seafile/seafile-server-latest\n./seahub.sh restart\n
/opt/seadoc-data
Placeholder spot for shared volumes. You may elect to store certain persistent information outside of a container; in our case we keep various log files outside. This allows you to rebuild containers easily without losing important information.
SeaDoc uses one database table seahub_db.sdoc_operation_log
to store operation logs.
Seafile can scan uploaded files for malicious content in the background. When configured to run periodically, the scan process scans all existing libraries on the server. In each scan, the process only scans files newly uploaded or updated since the last scan. For each file, the process executes a user-specified virus scan command to check whether the file contains a virus. Most anti-virus programs provide a command line utility for Linux.
To enable this feature, add the following options to seafile.conf
:
[virus_scan]\nscan_command = (command for checking virus)\nvirus_code = (command exit codes when file is virus)\nnonvirus_code = (command exit codes when file is not virus)\nscan_interval = (scanning interval, in unit of minutes, default to 60 minutes)\n
More details about the options:
An example for ClamAV (http://www.clamav.net/) is provided below:
[virus_scan]\nscan_command = clamscan\nvirus_code = 1\nnonvirus_code = 0\n
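The scan_command contract is purely exit-code based: Seafile runs the configured command once per file and compares its exit status against virus_code and nonvirus_code. The stub below (fake_scan is a hypothetical stand-in for clamscan) illustrates the convention:

```shell
# Stub scanner demonstrating the exit-code contract: 1 = virus, 0 = clean.
fake_scan() {
    case "$1" in
        *eicar*) return 1 ;;   # pretend EICAR-named files are infected
        *)       return 0 ;;
    esac
}

fake_scan "/tmp/report.pdf"; echo "report.pdf rc=$?"   # rc=0 -> clean
fake_scan "/tmp/eicar.com";  echo "eicar.com rc=$?"    # rc=1 -> virus
```

Any scanner whose exit codes follow a different scheme can still be used by mapping its codes in virus_code and nonvirus_code, or by wrapping it in a script like the kav4fs example later in this manual.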
To test whether your configuration works, you can trigger a scan manually:
cd seafile-server-latest\n./pro/pro.py virus_scan\n
If a virus was detected, you can see scan records and delete infected files on the Virus Scan page in the admin area.
Note
If you directly use the clamav command line tool to scan files, scanning will take a lot of time. To speed it up, we recommend running ClamAV as a daemon. Please refer to Run ClamAV as a Daemon.
When running ClamAV as a daemon, the scan_command
should be clamdscan
in seafile.conf
. An example for Clamav-daemon is provided below:
[virus_scan]\nscan_command = clamdscan\nvirus_code = 1\nnonvirus_code = 0\n
Since Pro Edition 6.0.0, a few more options have been added to provide finer-grained control over virus scanning.
[virus_scan]\n......\nscan_size_limit = (size limit for files to be scanned) # The unit is MB.\nscan_skip_ext = (a comma (',') separated list of file extensions to be ignored)\nthreads = (number of concurrent threads for scan, one thread for one file, default to 4)\n
The file extensions should start with '.'. The extensions are case insensitive. By default, files with the following extensions will be ignored:
.bmp, .gif, .ico, .png, .jpg, .mp3, .mp4, .wav, .avi, .rmvb, .mkv\n
The list you provide will override the default list.
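A sketch of how such a case-insensitive, dot-prefixed extension filter can be implemented (the helper below is illustrative; Seafile's actual implementation is internal):

```shell
# Case-insensitive skip check against a comma-separated extension list.
SCAN_SKIP_EXT=".bmp,.gif,.ico,.png,.jpg,.mp3,.mp4,.wav,.avi,.rmvb,.mkv"

should_skip() {
    ext=".${1##*.}"                              # extract ".EXT" from the filename
    ext=$(printf '%s' "$ext" | tr 'A-Z' 'a-z')   # extensions are case insensitive
    case ",$SCAN_SKIP_EXT," in
        *",$ext,"*) return 0 ;;                  # in the skip list: do not scan
        *)          return 1 ;;
    esac
}

should_skip "holiday.PNG" && echo "skip holiday.PNG"
should_skip "report.docx" || echo "scan report.docx"
```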
"},{"location":"extension/virus_scan/#scanning-files-on-upload","title":"Scanning Files on Upload","text":"You may also configure Seafile to scan files for virus upon the files are uploaded. This only works for files uploaded via web interface or web APIs. Files uploaded with syncing or SeaDrive clients cannot be scanned on upload due to performance consideration.
You may scan files uploaded from shared upload links by adding the option below to seahub_settings.py
:
ENABLE_UPLOAD_LINK_VIRUS_CHECK = True\n
Since Pro Edition 11.0.7, you may scan all uploaded files via web APIs by adding the option below to seafile.conf
:
[fileserver]\ncheck_virus_on_web_upload = true\n
"},{"location":"extension/virus_scan_with_clamav/","title":"Deploy ClamAV with Seafile","text":""},{"location":"extension/virus_scan_with_clamav/#deploy-with-docker","title":"Deploy with Docker","text":"If your Seafile server is deployed using Docker, we also recommend that you use Docker to deploy ClamAV by following the steps below, otherwise you can deploy it from binary package of ClamAV.
"},{"location":"extension/virus_scan_with_clamav/#download-clamavyml-and-insert-to-docker-compose-lists-in-env","title":"Download clamav.yml and insert to Docker-compose lists in .env","text":"Download clamav.yml
wget https://manual.seafile.com/12.0/docker/pro/clamav.yml\n
Modify .env
, insert clamav.yml
to field COMPOSE_FILE
COMPOSE_FILE='seafile-server.yml,caddy.yml,clamav.yml'\n
"},{"location":"extension/virus_scan_with_clamav/#modify-seafileconf","title":"Modify seafile.conf","text":"Add the following statements to seafile.conf
[virus_scan]\nscan_command = clamdscan\nvirus_code = 1\nnonvirus_code = 0\nscan_interval = 5\nscan_size_limit = 20\nthreads = 2\n
"},{"location":"extension/virus_scan_with_clamav/#restart-docker-container","title":"Restart docker container","text":"docker compose down\ndocker compose up -d \n
Wait a few minutes until ClamAV finishes initializing.
Now ClamAV can be used.
"},{"location":"extension/virus_scan_with_clamav/#use-clamav-in-binary-based-deployment","title":"Use ClamAV in binary based deployment","text":""},{"location":"extension/virus_scan_with_clamav/#install-clamav-daemon-clamav-freshclam","title":"Install clamav-daemon & clamav-freshclam","text":"apt-get install clamav-daemon clamav-freshclam\n
You should run Clamd with root permission so it can scan any file. Edit /etc/clamav/clamd.conf and change the following lines:
LocalSocketGroup root\nUser root\n
"},{"location":"extension/virus_scan_with_clamav/#start-the-clamav-daemon","title":"Start the clamav-daemon","text":"systemctl start clamav-daemon\n
Test the software
$ curl https://secure.eicar.org/eicar.com.txt | clamdscan -\n
The output must include:
stream: Eicar-Test-Signature FOUND\n
"},{"location":"extension/virus_scan_with_kav4fs/","title":"Virus Scan with kav4fs","text":""},{"location":"extension/virus_scan_with_kav4fs/#prerequisite","title":"Prerequisite","text":"Assume you have installed Kaspersky Anti-Virus for Linux File Server on the Seafile Server machine.
If the user that runs Seafile Server is not root, it should have sudo privileges so that no password is required when running kav4fs-control. Add the following content to /etc/sudoers:
<user of running seafile server> ALL=(ALL:ALL) ALL\n<user of running seafile server> ALL=NOPASSWD: /opt/kaspersky/kav4fs/bin/kav4fs-control\n
"},{"location":"extension/virus_scan_with_kav4fs/#script","title":"Script","text":"As the return code of kav4fs cannot reflect the file scan result, we use a shell wrapper script to parse the scan output and based on the parse result to return different return codes to reflect the scan result.
Save the following contents to a file such as kav4fs_scan.sh
:
#!/bin/bash\n\nTEMP_LOG_FILE=`mktemp /tmp/XXXXXXXXXX`\nVIRUS_FOUND=1\nCLEAN=0\nUNDEFINED=2\nKAV4FS='/opt/kaspersky/kav4fs/bin/kav4fs-control'\nif [ ! -x $KAV4FS ]\nthen\n echo \"Binary not executable\"\n exit $UNDEFINED\nfi\n\nsudo $KAV4FS --scan-file \"$1\" > $TEMP_LOG_FILE\nif [ \"$?\" -ne 0 ]\nthen\n echo \"Error due to check file '$1'\"\n exit 3\nfi\nTHREATS_C=`grep 'Threats found:' $TEMP_LOG_FILE|cut -d':' -f 2|sed 's/ //g'`\nRISKWARE_C=`grep 'Riskware found:' $TEMP_LOG_FILE|cut -d':' -f 2|sed 's/ //g'`\nINFECTED=`grep 'Infected:' $TEMP_LOG_FILE|cut -d':' -f 2|sed 's/ //g'`\nSUSPICIOUS=`grep 'Suspicious:' $TEMP_LOG_FILE|cut -d':' -f 2|sed 's/ //g'`\nSCAN_ERRORS_C=`grep 'Scan errors:' $TEMP_LOG_FILE|cut -d':' -f 2|sed 's/ //g'`\nPASSWORD_PROTECTED=`grep 'Password protected:' $TEMP_LOG_FILE|cut -d':' -f 2|sed 's/ //g'`\nCORRUPTED=`grep 'Corrupted:' $TEMP_LOG_FILE|cut -d':' -f 2|sed 's/ //g'`\n\nrm -f $TEMP_LOG_FILE\n\nif [ $THREATS_C -gt 0 -o $RISKWARE_C -gt 0 -o $INFECTED -gt 0 -o $SUSPICIOUS -gt 0 ]\nthen\n exit $VIRUS_FOUND\nelif [ $SCAN_ERRORS_C -gt 0 -o $PASSWORD_PROTECTED -gt 0 -o $CORRUPTED -gt 0 ]\nthen\n exit $UNDEFINED\nelse\n exit $CLEAN\nfi\n
Grant execute permissions for the script (make sure it is owned by the user Seafile is running as):
chmod u+x kav4fs_scan.sh\n
The meaning of the script's return codes:
1: found virus\n0: no virus\nother: scan failed\n
"},{"location":"extension/virus_scan_with_kav4fs/#configuration","title":"Configuration","text":"Add following content to seafile.conf
:
[virus_scan]\nscan_command = <absolute path of kav4fs_scan.sh>\nvirus_code = 1\nnonvirus_code = 0\nscan_interval = <scanning interval, in unit of minutes, default to 60 minutes>\n
"},{"location":"extension/webdav/","title":"WebDAV extension","text":"In the document below, we assume your seafile installation folder is /opt/seafile
.
The configuration file is /opt/seafile-data/seafile/conf/seafdav.conf
(for deploying from binary packages, it should be /opt/seafile/conf/seafdav.conf
). If it is not created already, you can just create the file.
[WEBDAV]\n\n# Default is false. Change it to true to enable SeafDAV server.\nenabled = true\n\nport = 8080\ndebug = true\n\n# If you deploy seafdav behind nginx/apache, you need to modify \"share_name\".\nshare_name = /seafdav\n\n# SeafDAV uses Gunicorn as web server.\n# This option maps to Gunicorn's 'workers' setting. https://docs.gunicorn.org/en/stable/settings.html?#workers\n# By default it's set to 5 processes.\nworkers = 5\n\n# This option maps to Gunicorn's 'timeout' setting. https://docs.gunicorn.org/en/stable/settings.html?#timeout\n# By default it's set to 1200 seconds, to support large file uploads.\ntimeout = 1200\n
Every time the configuration is modified, you need to restart seafile server to make it take effect.
Deploy in Docker:
docker compose restart\n
Deploy from binary packages:
cd /opt/seafile/seafile-server-latest/\n./seafile.sh restart\n
Your WebDAV client would visit the Seafile WebDAV server at http{s}://example.com/seafdav/
(for deploying from binary packages, it should be http{s}://example.com:8080/seafdav/
)
In Pro Edition 7.1.8 and Community Edition 7.1.5, an option was added to append the library ID to the library name returned by SeafDAV.
show_repo_id=true\n
"},{"location":"extension/webdav/#proxy-only-for-deploying-from-binary-packages","title":"Proxy (only for deploying from binary packages)","text":"Tip
For deploying in Docker, the WebDAV server has been proxied in /seafdav/*
, as you can skip this step
For Seafdav, the configuration of Nginx is as follows:
.....\n\n location /seafdav {\n rewrite ^/seafdav$ /seafdav/ permanent;\n }\n\n location /seafdav/ {\n proxy_pass http://127.0.0.1:8080/seafdav/;\n proxy_set_header Host $host;\n proxy_set_header X-Real-IP $remote_addr;\n proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;\n proxy_set_header X-Forwarded-Host $server_name;\n proxy_set_header X-Forwarded-Proto $scheme;\n proxy_read_timeout 1200s;\n client_max_body_size 0;\n\ufeff\n access_log /var/log/nginx/seafdav.access.log seafileformat;\n error_log /var/log/nginx/seafdav.error.log;\n }\n\n location /:dir_browser {\n proxy_pass http://127.0.0.1:8080/:dir_browser;\n }\n
For Seafdav, the configuration of Apache is as follows:
......\n <Location /seafdav>\n ProxyPass \"http://127.0.0.1:8080/seafdav\"\n </Location>\n
"},{"location":"extension/webdav/#notes-on-clients","title":"Notes on Clients","text":"Please first note that, there are some known performance limitation when you map a Seafile webdav server as a local file system (or network drive).
So WebDAV is more suitable for infrequent file access. If you want better performance, please use the sync client instead.
Windows Explorer supports HTTPS connections, but it requires a valid certificate on the server. It's generally recommended to use Windows Explorer to map a WebDAV server as a network drive. If you use a self-signed certificate, you have to add the certificate's CA into Windows' system CA store.
On Linux you have more choices. You can use file manager such as Nautilus to connect to webdav server. Or you can use davfs2 from the command line.
To use davfs2
sudo apt-get install davfs2\nsudo mount -t davfs -o uid=<username> https://example.com/seafdav /media/seafdav/\n
The -o option sets the owner of the mounted directory to <username> so that it's writable for non-root users.
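To make the mount persistent, davfs2 also supports an /etc/fstab entry. A sketch with example values (the URL, mount point and username are placeholders):

```
# /etc/fstab
https://example.com/seafdav /media/seafdav davfs uid=<username>,rw,user,noauto 0 0
```

With the user option set, the share can then be mounted without root via mount /media/seafdav.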
It's recommended to disable the LOCK operation for davfs2. You have to edit /etc/davfs2/davfs2.conf:
use_locks 0\n
Finder's support for WebDAV is also not very stable and slow. So it is recommended to use a webdav client software such as Cyberduck.
"},{"location":"extension/webdav/#frequently-asked-questions","title":"Frequently Asked Questions","text":""},{"location":"extension/webdav/#clients-cant-connect-to-seafdav-server","title":"Clients can't connect to seafdav server","text":"By default, seafdav is disabled. Check whether you have enabled = true
in seafdav.conf
. If not, modify it and restart seafile server.
If you deploy SeafDAV behind Nginx/Apache, make sure to change the value of share_name
as shown in the sample configuration above. Restart your Seafile server and try again.
First, check the seafdav.log
to see if there is a log entry like the following:
\"MOVE ... -> 502 Bad Gateway\n
If you have enabled debug, there will also be the following log.
09:47:06.533 - DEBUG : Raising DAVError 502 Bad Gateway: Source and destination must have the same scheme.\nIf you are running behind a reverse proxy, you may have to rewrite the 'Destination' header.\n(See https://github.com/mar10/wsgidav/issues/183)\n\n09:47:06.533 - DEBUG : Caught (502, \"Source and destination must have the same scheme.\\nIf you are running behind a reverse proxy, you may have to rewrite the 'Destination' header.\\n(See https://github.com/mar10/wsgidav/issues/183)\")\n
This issue usually occurs when you have configured HTTPS, but the request was forwarded, resulting in the HTTP_X_FORWARDED_PROTO
value in the request received by Seafile not being HTTPS.
You can solve this by manually changing the value of HTTP_X_FORWARDED_PROTO
. For example, in nginx, change
proxy_set_header X-Forwarded-Proto $scheme;\n
to
proxy_set_header X-Forwarded-Proto https;\n
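Putting this together, below is a minimal sketch of a SeafDAV proxy block with the hard-coded scheme. The snippet path, the backend port 8080 and the /seafdav location are assumptions — adjust them to your deployment, and place the block in your real nginx configuration rather than /tmp:

```shell
# Write a sketch of the SeafDAV proxy block (path, port and location are assumptions):
cat > /tmp/seafdav.conf << 'EOF'
location /seafdav {
    proxy_pass         http://127.0.0.1:8080;
    proxy_set_header   Host $host;
    proxy_set_header   X-Forwarded-For $proxy_add_x_forwarded_for;
    # Hard-coded scheme so MOVE/COPY 'Destination' headers keep the same scheme:
    proxy_set_header   X-Forwarded-Proto https;
    client_max_body_size 0;
}
EOF
grep -c 'X-Forwarded-Proto https' /tmp/seafdav.conf   # prints 1
```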
"},{"location":"extension/webdav/#windows-explorer-reports-file-size-exceeds-the-limit-allowed-and-cannot-be-saved","title":"Windows Explorer reports \"file size exceeds the limit allowed and cannot be saved\"","text":"This happens when you map webdav as a network drive, and tries to copy a file larger than about 50MB from the network drive to a local folder.
This is because Windows Explorer limits the size of files downloaded from a WebDAV server. To raise this limit, change a registry entry on the client machine. There is a registry value named FileSizeLimitInBytes
under HKEY_LOCAL_MACHINE -> SYSTEM -> CurrentControlSet -> Services -> WebClient -> Parameters
.
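The change can also be applied by importing a .reg file; a sketch — the value below (dword:ffffffff, about 4 GB) is the commonly used maximum, and the WebClient service must be restarted (or the machine rebooted) before it takes effect:

```
Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\WebClient\Parameters]
"FileSizeLimitInBytes"=dword:ffffffff
```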
Seafile Server consists of the following two components:
Seahub: the website frontend (a Django application), which by default listens on port 8000.
Seafile server (seaf-server): the data service daemon; it handles raw file upload, download and synchronization, and by default listens on port 8082. You can configure Nginx/Apache to proxy traffic to the local 8082 port.
The picture below shows how Seafile clients access files when you configure Seafile behind Nginx/Apache.
Tip
All access to the Seafile service (including Seahub and Seafile server) can be configured behind Nginx or Apache web server. This way all network traffic to the service can be encrypted with HTTPS.
"},{"location":"introduction/contribution/","title":"Contribution","text":""},{"location":"introduction/contribution/#licensing","title":"Licensing","text":"The different components of Seafile project are released under different licenses:
Forum: https://forum.seafile.com
Follow us @seafile https://twitter.com/seafile
"},{"location":"introduction/contribution/#report-a-bug","title":"Report a Bug","text":"Seafile manages files using libraries. Every library has an owner, who can share the library to other users or share it with groups. The sharing can be read-only or read-write.
"},{"location":"introduction/file_permission_management/#read-only-syncing","title":"Read-only syncing","text":"Read-only libraries can be synced to local desktop. The modifications at the client will not be synced back. If a user has modified some file contents, he can use \"resync\" to revert the modifications.
"},{"location":"introduction/file_permission_management/#cascading-permissionsub-folder-permissions-pro-edition","title":"Cascading permission/Sub-folder permissions (Pro edition)","text":"Sharing controls whether a user or group can see a library, while sub-folder permissions are used to modify permissions on specific folders.
Supposing you share a library as read-only to a group and then want specific sub-folders to be read-write for a few users, you can set read-write permissions on sub-folders for some users and groups.
Note
Please check https://www.seafile.com/en/roadmap/
"},{"location":"outdate/change_default_java/","title":"Change default java","text":"When you have both Java 6 and Java 7 installed, the default Java may not be Java 7.
Do this by typing java -version
, and check the output.
If the default Java is Java 6, then do
On Debian/Ubuntu:
sudo update-alternatives --config java\n
On CentOS/RHEL:
sudo alternatives --config java\n
The above command will ask you to choose one of the installed Java versions as default. You should choose Java 7 here.
After that, re-run java -version
to make sure the change has taken effect.
Reference link
"},{"location":"outdate/kerberos_config/","title":"Kerberos config","text":""},{"location":"outdate/kerberos_config/#kerberos","title":"Kerberos","text":"NOTE: Since version 7.0, this documenation is deprecated. Users should use Apache as a proxy server for Kerberos authentication. Then configure Seahub by the instructions in Remote User Authentication.
Kerberos is a widely used single sign on (SSO) protocol. Seafile server supports authentication via Kerberos. It allows users to log in to Seafile without entering credentials again if they have a kerberos ticket.
In this documentation, we assume the reader is familiar with Kerberos installation and configuration.
Seahub provides a special URL to handle Kerberos login. The URL is https://your-server/krb5-login
. Only this URL needs to be configured under Kerberos protection. All other URLs don't go through the Kerberos module. The overall workflow for a user to login with Kerberos is as follows:
https://your-server/krb5-login
. The configuration includes three steps:
Store the keytab under the name defined below, make it readable only by the Apache user (e.g., httpd or www-data), and restrict it with chmod 600.
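The permission tightening can be sketched as follows. The commands below run against a scratch file in /tmp for illustration; on the real server, apply them to the keytab path from your Apache configuration and to your Apache user (e.g., www-data):

```shell
# Illustration on a scratch file; substitute your real keytab path and Apache user.
touch /tmp/http.keytab
chmod 600 /tmp/http.keytab
# On the real server (as root): chown www-data:www-data /etc/apache2/conf.d/http.keytab
stat -c '%a' /tmp/http.keytab   # prints 600
```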
"},{"location":"outdate/kerberos_config/#apache-configuration","title":"Apache Configuration","text":"You should create a new location in your virtual host configuration for Kerberos.
<IfModule mod_ssl.c>\n <VirtualHost _default_:443>\n ServerName seafile.example.com\n DocumentRoot /var/www\n...\n <Location /krb5-login/>\n SSLRequireSSL\n AuthType Kerberos\n AuthName \"Kerberos EXAMPLE.ORG\"\n KrbMethodNegotiate On\n KrbMethodK5Passwd On\n Krb5KeyTab /etc/apache2/conf.d/http.keytab\n #ErrorDocument 401 '<html><meta http-equiv=\"refresh\" content=\"0; URL=/accounts/login\"><body>Kerberos authentication did not pass.</body></html>'\n Require valid-user\n </Location>\n...\n </VirtualHost>\n</IfModule>\n
After restarting Apache, you should see in the Apache logs that user@REALM is used when accessing https://seafile.example.com/krb5-login/.
"},{"location":"outdate/kerberos_config/#configure-seahub","title":"Configure Seahub","text":"Seahub extracts the username from the REMOTE_USER
environment variable.
Now we have to tell Seahub what to do with the authentication information passed in by Kerberos.
Add the following option to seahub_settings.py.
ENABLE_KRB5_LOGIN = True\n
"},{"location":"outdate/kerberos_config/#verify","title":"Verify","text":"After restarting Apache and Seafile services, you can test the Kerberos login workflow.
"},{"location":"outdate/outlook_addin_config/","title":"SSO for Seafile Outlook Add-in","text":"The Seafile Add-in for Outlook natively supports authentication via username and password. In order to authenticate with SSO, the add-in utilizes SSO support integrated in Seafile's webinterface Seahub.
Specifically, this is how SSO with the add-in works:
http(s)://SEAFILE_SERVER_URL/outlook/
http(s)://SEAFILE_SERVER_URL/accounts/login/
including a redirect request to /outlook/ following a successful authentication (e.g., https://demo.seafile.com/accounts/login/?next=/jwt-sso/?page=/outlook/
)This document explains how to configure Seafile and the reverse proxy and how to deploy the PHP script.
"},{"location":"outdate/outlook_addin_config/#requirements","title":"Requirements","text":"SSO authentication must be configured in Seafile.
Seafile Server must be version 8.0 or above.
"},{"location":"outdate/outlook_addin_config/#installing-prerequisites","title":"Installing prerequisites","text":"The packages php, composer, firebase-jwt, and guzzle must be installed. PHP can usually be downloaded and installed via the distribution's official repositories. firebase-jwt and guzzle are installed using composer.
First, install the php package and check the installed version:
# CentOS/RedHat\n$ sudo yum install -y php-fpm php-curl\n$ php --version\n\n# Debian/Ubuntu\n$ sudo apt install -y php-fpm php-curl\n$ php --version\n
Second, install composer. You can find an up-to-date installation manual at https://getcomposer.org/ covering CentOS, Debian, and Ubuntu.
Third, use composer to install firebase-jwt and guzzle in a new directory in /var/www
:
$ mkdir -p /var/www/outlook-sso\n$ cd /var/www/outlook-sso\n$ composer require firebase/php-jwt guzzlehttp/guzzle\n
"},{"location":"outdate/outlook_addin_config/#configuring-seahub","title":"Configuring Seahub","text":"Add this block to the config file seahub_settings.py
using a text editor:
ENABLE_JWT_SSO = True\nJWT_SSO_SECRET_KEY = 'SHARED_SECRET'\nENABLE_SYS_ADMIN_GENERATE_USER_AUTH_TOKEN = True\n
Replace SHARED_SECRET with a secret of your own choosing.
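A strong shared secret can be generated on the command line; a sketch using openssl (any sufficiently long random string works):

```shell
# Generate a 256-bit random secret, hex-encoded (64 characters), for JWT_SSO_SECRET_KEY
openssl rand -hex 32
```

Paste the same value into seahub_settings.py and, later, into the add-in's config.php so both sides share the key.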
"},{"location":"outdate/outlook_addin_config/#configuring-the-proxy-server","title":"Configuring the proxy server","text":"The configuration depends on the proxy server use.
If you use nginx, add the following location block to the nginx configuration:
location /outlook {\n alias /var/www/outlook-sso/public;\n index index.php;\n location ~ \\.php$ {\n fastcgi_split_path_info ^(.+\\.php)(/.+)$;\n fastcgi_pass unix:/var/run/php/php7.4-fpm.sock;\n fastcgi_param SCRIPT_FILENAME $request_filename;\n fastcgi_index index.php;\n include fastcgi_params;\n }\n}\n
This sample block assumes that PHP 7.4 is installed. If you have a different PHP version on your system, modify the PHP version in the fastcgi_pass socket path.
Note: The alias path can be altered. We advise against it unless there are good reasons. If you do, make sure you modify the path accordingly in all subsequent steps.
Finally, check the nginx configuration and restart nginx:
$ nginx -t\n$ nginx -s reload\n
"},{"location":"outdate/outlook_addin_config/#deploying-the-php-script","title":"Deploying the PHP script","text":"The PHP script and corresponding configuration files will be saved in the new directory created earlier. Change into it and add a PHP config file:
$ cd /var/www/outlook-sso\n$ nano config.php\n
Paste the following content in the config.php
:
<?php\n\n# general settings\n$seafile_url = 'SEAFILE_SERVER_URL';\n$jwt_shared_secret = 'SHARED_SECRET';\n\n# Option 1: provide credentials of a seafile admin user\n$seafile_admin_account = [\n 'username' => '',\n 'password' => '',\n];\n\n# Option 2: provide the api-token of a seafile admin user\n$seafile_admin_token = '';\n\n?>\n
First, replace SEAFILE_SERVER_URL with the URL of your Seafile Server and SHARED_SECRET with the key used in Configuring Seahub.
Second, add either the user credentials of a Seafile user with admin rights or the API-token of such a user.
In the next step, create the index.php
and copy & paste the PHP script:
$ mkdir /var/www/outlook-sso/public\n$ cd /var/www/outlook-sso/public\n$ nano index.php\n
Paste the following code block:
<?php\n/** IMPORTANT: there is no need to change anything in this file ! **/\n\nrequire_once __DIR__ . '/../vendor/autoload.php';\nrequire_once __DIR__ . '/../config.php';\n\nif(!empty($_GET['jwt-token'])){\n try {\n $decoded = Firebase\\JWT\\JWT::decode($_GET['jwt-token'], new Firebase\\JWT\\Key($jwt_shared_secret, 'HS256'));\n }\n catch (Exception $e){\n echo json_encode([\"error\" => \"wrong JWT-Token\"]);\n die();\n }\n\n try {\n // init connetion to seafile api\n $client = new GuzzleHttp\\Client(['base_uri' => $seafile_url]);\n\n // get admin api-token with his credentials (if not set)\n if(empty($seafile_admin_token)){\n $request = $client->request('POST', '/api2/auth-token/', ['form_params' => $seafile_admin_account]);\n $response = json_decode($request->getBody());\n $seafile_admin_token = $response->token;\n }\n\n // get api-token of the user\n $request = $client->request('POST', '/api/v2.1/admin/generate-user-auth-token/', [\n 'json' => ['email' => $decoded->email],\n 'headers' => ['Authorization' => 'Token '. $seafile_admin_token]\n ]);\n $response = json_decode($request->getBody());\n\n // create the output for the outlook plugin (json like response)\n echo json_encode([\n 'exp' => $decoded->exp,\n 'email' => $decoded->email,\n 'name' => $decoded->name,\n 'token' => $response->token,\n ]);\n } catch (GuzzleHttp\\Exception\\ClientException $e){\n echo $e->getResponse()->getBody();\n }\n}\nelse{ // no jwt-token. therefore redirect to the login page of seafile\n header(\"Location: \". $seafile_url .\"/accounts/login/?next=/jwt-sso/?page=/outlook\");\n} ?>\n
Note: Contrary to the config.php, no replacements or modifications are necessary in this file.
The directory layout in /var/www/outlook-sso/
should now look as follows:
$ tree -L 2 /var/www/outlook-sso\n/var/www/outlook-sso/\n\u251c\u2500\u2500 composer.json\n\u251c\u2500\u2500 composer.lock\n\u251c\u2500\u2500 config.php\n\u251c\u2500\u2500 public\n| \u2514\u2500\u2500 index.php\n\u2514\u2500\u2500 vendor\n \u251c\u2500\u2500 autoload.php\n \u251c\u2500\u2500 composer\n \u2514\u2500\u2500 firebase\n
Seafile and Seahub are now configured to support SSO in the Seafile Add-in for Outlook.
"},{"location":"outdate/outlook_addin_config/#testing","title":"Testing","text":"You can now test SSO authentication in the add-in. Hit the SSO button in the settings of the Seafile add-in.
"},{"location":"outdate/seaf_encrypt/","title":"Seafile Storage Encryption Backend","text":"This feature is deprecated. We recommend you to use the encryption feature provided the storage system.
Since Seafile Professional Server 5.1.3, we support storage encryption backend functionality. When enabled, all Seafile objects (commit, fs, block) are encrypted with the AES 256 CBC algorithm before being written to the storage backend. Currently supported backends are: file system, Ceph, Swift and S3.
Note that all objects will be encrypted with the same global key/iv pair. The key/iv pair has to be generated by the system admin and stored safely. If the key/iv pair is lost, all data cannot be recovered.
"},{"location":"outdate/seaf_encrypt/#configure-storage-backend-encryption","title":"Configure Storage Backend Encryption","text":""},{"location":"outdate/seaf_encrypt/#generate-key-and-iv","title":"Generate Key and IV","text":"Go to /seafile-server-latest, execute ./seaf-gen-key.sh -h
. It will print the following usage information:
usage :\nseaf-gen-key.sh\n -p <file path to write key iv, default ./seaf-key.txt>\n
By default, the key/iv pair will be saved to a file named seaf-key.txt in the current directory. You can use the '-p' option to change the path.
"},{"location":"outdate/seaf_encrypt/#configure-a-freshly-installed-seafile-server","title":"Configure a freshly installed Seafile Server","text":"Add the following configuration to seafile.conf:
[store_crypt]\nkey_path = <the key file path generated in previous section>\n
Now the encryption feature should be working.
"},{"location":"outdate/seaf_encrypt/#migrating-existing-seafile-server","title":"Migrating Existing Seafile Server","text":"If you have existing data in the Seafile server, you have to migrate/encrypt the existing data. You must stop Seafile server before migrating the data.
"},{"location":"outdate/seaf_encrypt/#create-directories-for-encrypted-data","title":"Create Directories for Encrypted Data","text":"Create new configuration and data directories for the encrypted data.
cd seafile-server-latest\ncp -r conf conf-enc\nmkdir seafile-data-enc\ncp -r seafile-data/library-template seafile-data-enc\n# If you use SQLite database\ncp seafile-data/seafile.db seafile-data-enc/\n
"},{"location":"outdate/seaf_encrypt/#edit-config-files","title":"Edit Config Files","text":"If you configured S3/Swift/Ceph backend, edit /conf-enc/seafile.conf. You must use a different bucket/container/pool to store the encrypted data.
Then add the following configuration to /conf-enc/seafile.conf
[store_crypt]\nkey_path = <the key file path generated in previous section>\n
"},{"location":"outdate/seaf_encrypt/#migrate-the-data","title":"Migrate the Data","text":"Go to /seafile-server-latest, use the seaf-encrypt.sh script to migrate the data.
Run ./seaf-encrypt.sh -f ../conf-enc -e ../seafile-data-enc
,
Starting seaf-encrypt, please wait ...\n[04/26/16 06:59:40] seaf-encrypt.c(444): Start to encrypt 57 block among 12 repo.\n[04/26/16 06:59:40] seaf-encrypt.c(444): Start to encrypt 102 fs among 12 repo.\n[04/26/16 06:59:41] seaf-encrypt.c(454): Success encrypt all fs.\n[04/26/16 06:59:40] seaf-encrypt.c(444): Start to encrypt 66 commit among 12 repo.\n[04/26/16 06:59:41] seaf-encrypt.c(454): Success encrypt all commit.\n[04/26/16 06:59:41] seaf-encrypt.c(454): Success encrypt all block.\nseaf-encrypt run done\nDone.\n
If there are error messages after executing seaf-encrypt.sh, you can fix the problem and run the script again. Objects that have already been migrated will not be copied again.
"},{"location":"outdate/seaf_encrypt/#clean-up","title":"Clean Up","text":"Go to , execute following commands:
mv conf conf-bak\nmv seafile-data seafile-data-bak\nmv conf-enc conf\nmv seafile-data-enc seafile-data\n
Restart Seafile Server. If everything works okay, you can remove the backup directories.
"},{"location":"outdate/terms_and_conditions/","title":"Terms and Conditions","text":"Starting from version 6.0, system admin can add T&C at admin panel, all users need to accept that before using the site.
In order to use this feature, please add the following line to seahub_settings.py
,
ENABLE_TERMS_AND_CONDITIONS = True\n
After restarting, there will be a \"Terms and Conditions\" section in the sidebar of the admin panel.
"},{"location":"outdate/using_fuse/","title":"Seafile","text":""},{"location":"outdate/using_fuse/#using-fuse","title":"Using Fuse","text":"Files in the seafile system are split to blocks, which means what are stored on your seafile server are not complete files, but blocks. This design faciliates effective data deduplication.
However, administrators sometimes want to access the files directly on the server. You can use seaf-fuse to do this.
Seaf-fuse
is an implementation based on the FUSE virtual filesystem. In short, it mounts all the Seafile files to a folder (called the '''mount point'''), so that you can access all the files managed by the Seafile server just as you would access a normal folder on your server.
Seaf-fuse has been available since Seafile Server '''2.1.0'''.
'''Note:''' * Encrypted folders can't be accessed by seaf-fuse. * Currently the implementation is '''read-only''', which means you can't modify the files through the mounted folder. * On Debian/CentOS systems, you need to be in the \"fuse\" group to have the permission to mount a FUSE folder.
"},{"location":"outdate/using_fuse/#how-to-start-seaf-fuse","title":"How to start seaf-fuse","text":"Assume we want to mount to /data/seafile-fuse
.
mkdir -p /data/seafile-fuse\n
"},{"location":"outdate/using_fuse/#start-seaf-fuse-with-the-script","title":"Start seaf-fuse with the script","text":"'''Note:''' Before start seaf-fuse, you should have started seafile server with ./seafile.sh start
.
./seaf-fuse.sh start /data/seafile-fuse\n
"},{"location":"outdate/using_fuse/#stop-seaf-fuse","title":"Stop seaf-fuse","text":"./seaf-fuse.sh stop\n
"},{"location":"outdate/using_fuse/#contents-of-the-mounted-folder","title":"Contents of the mounted folder","text":""},{"location":"outdate/using_fuse/#the-top-level-folder","title":"The top level folder","text":"Now you can list the content of /data/seafile-fuse
.
$ ls -lhp /data/seafile-fuse\n\ndrwxr-xr-x 2 root root 4.0K Jan 1 1970 abc@abc.com/\ndrwxr-xr-x 2 root root 4.0K Jan 1 1970 foo@foo.com/\ndrwxr-xr-x 2 root root 4.0K Jan 1 1970 plus@plus.com/\ndrwxr-xr-x 2 root root 4.0K Jan 1 1970 sharp@sharp.com/\ndrwxr-xr-x 2 root root 4.0K Jan 1 1970 test@test.com/\n
$ ls -lhp /data/seafile-fuse/abc@abc.com\n\ndrwxr-xr-x 2 root root 924 Jan 1 1970 5403ac56-5552-4e31-a4f1-1de4eb889a5f_Photos/\ndrwxr-xr-x 2 root root 1.6K Jan 1 1970 a09ab9fc-7bd0-49f1-929d-6abeb8491397_My Notes/\n
As the listing above shows, under each user's folder there are subfolders, each representing a library of that user, named in the format '''{library_id}_{library_name}'''.
"},{"location":"outdate/using_fuse/#the-folder-for-a-library","title":"The folder for a library","text":"$ ls -lhp /data/seafile-fuse/abc@abc.com/5403ac56-5552-4e31-a4f1-1de4eb889a5f_Photos/\n\n-rw-r--r-- 1 root root 501K Jan 1 1970 image.png\n-rw-r--r-- 1 root root 501K Jan 1 1970 sample.jpng\n
"},{"location":"outdate/using_fuse/#if-you-get-a-permission-denied-error","title":"If you get a \"Permission denied\" error","text":"If you get an error message saying \"Permission denied\" when running ./seaf-fuse.sh start
, most likely you are not in the \"fuse group\". You should:
sudo usermod -a -G fuse <your-user-name>\n
./seaf-fuse.sh start <path>
again. You need to install the ffmpeg package for video thumbnails to work correctly:
Ubuntu 16.04
# Install ffmpeg\nsudo apt-get update && sudo apt-get -y install ffmpeg\n\n# Now we need to install some modules\npip install pillow moviepy\n
CentOS 7
# We need to activate the epel repos\nyum -y install epel-release\nrpm --import http://li.nux.ro/download/nux/RPM-GPG-KEY-nux.ro\n\n# Then update the repo and install ffmpeg\nyum -y install ffmpeg ffmpeg-devel\n\n# Now we need to install some modules\npip install pillow moviepy\n
Debian Jessie
# Add backports repo to /etc/apt/sources.list.d/\n# e.g. the following repo works (June 2017)\nsudo echo \"deb http://httpredir.debian.org/debian $(lsb_release -cs)-backports main non-free\" > /etc/apt/sources.list.d/debian-backports.list\n\n# Then update the repo and install ffmpeg\nsudo apt-get update && sudo apt-get -y install ffmpeg\n\n# Now we need to install some modules\npip install pillow moviepy\n
"},{"location":"outdate/video_thumbnails/#configure-seafile-to-create-thumbnails","title":"Configure Seafile to create thumbnails","text":"Now configure accordingly in seahub_settings.py
# Enable or disable thumbnail for video. ffmpeg and moviepy should be installed first. \n# For details, please refer to https://manual.seafile.com/deploy/video_thumbnails/\n# NOTE: since version 6.1\nENABLE_VIDEO_THUMBNAIL = True\n\n# Use the frame at 5 second as thumbnail\nTHUMBNAIL_VIDEO_FRAME_TIME = 5 \n\n# Absolute filesystem path to the directory that will hold thumbnail files.\nTHUMBNAIL_ROOT = '/haiwen/seahub-data/thumbnail/thumb/'\n
"},{"location":"setup/caddy/","title":"HTTPS and Caddy","text":"Note
From Seafile Docker 12.0 on, HTTPS is handled by Caddy. The default Caddy image used by Seafile Docker is lucaslorentz/caddy-docker-proxy:2.9
.
Caddy is a modern open-source web server that proxies external traffic to the internal services in Seafile Docker. In addition to the advantages of traditional proxy components (e.g., Nginx), Caddy makes it easier to obtain and renew HTTPS certificates thanks to its simpler configuration.
To enable HTTPS, you only need to correctly configure the following fields in .env
:
SEAFILE_SERVER_PROTOCOL=https\nSEAFILE_SERVER_HOSTNAME=example.com\n
"},{"location":"setup/cluster_deploy_with_docker/","title":"Seafile Docker Cluster Deployment","text":"Seafile Docker cluster deployment requires \"sticky session\" settings in the load balancer. Otherwise sometimes folder download on the web UI can't work properly. Read the Load Balancer Setting for details.
"},{"location":"setup/cluster_deploy_with_docker/#environment","title":"Environment","text":"System: Ubuntu 24.04
Seafile Server: 2 frontend nodes, 1 backend node
We assume you have already deployed Memcached, MariaDB and ElasticSearch on separate machines, and that you use S3-compatible object storage.
"},{"location":"setup/cluster_deploy_with_docker/#deploy-seafile-service","title":"Deploy Seafile service","text":""},{"location":"setup/cluster_deploy_with_docker/#deploy-the-first-seafile-frontend-node","title":"Deploy the first Seafile frontend node","text":"Create the mount directory
mkdir -p /opt/seafile/shared\n
Pulling Seafile image
docker pull seafileltd/seafile-pro-mc:12.0-latest\n
Note
Since v12.0, Seafile PE versions are hosted on DockerHub and do not require a username and password to download.
Download the seafile-server.yml
and .env
wget -O .env https://manual.seafile.com/12.0/docker/cluster/env\nwget https://manual.seafile.com/12.0/docker/cluster/seafile-server.yml\n
Modify the variables in .env
(especially the terms like <...>
).
Tip
If you have already deployed S3 storage backend and plan to apply it to Seafile cluster, you can modify the variables in .env
to set them synchronously during initialization.
Start the Seafile docker
$ cd /opt/seafile\n$ docker compose up -d\n
Cluster init mode
Because CLUSTER_INIT_MODE is true in the .env
file, Seafile docker will be started in init mode and generate configuration files. As the results, you can see the following lines if you trace the Seafile container (i.e., docker logs seafile
):
---------------------------------\nThis is your configuration\n---------------------------------\n\n server name: seafile\n server ip/domain: seafile.example.com\n\n seafile data dir: /opt/seafile/seafile-data\n fileserver port: 8082\n\n database: create new\n ccnet database: ccnet_db\n seafile database: seafile_db\n seahub database: seahub_db\n database user: seafile\n\n\nGenerating seafile configuration ...\n\ndone\nGenerating seahub configuration ...\n\n\n-----------------------------------------------------------------\nYour seafile server configuration has been finished successfully.\n-----------------------------------------------------------------\n\n\n[2024-11-21 02:22:37] Updating version stamp\nStart init\n\nInit success\n
In initialization mode, the service will not be started. During this time you can check the generated configuration files (e.g., MySQL, Memcached, Elasticsearch) in configuration files:
After initailizing the cluster, the following fields can be removed in .env
CLUSTER_INIT_MODE
, must be removed from .env fileCLUSTER_INIT_MEMCACHED_HOST
CLUSTER_INIT_ES_HOST
CLUSTER_INIT_ES_PORT
INIT_S3_STORAGE_BACKEND_CONFIG
INIT_S3_COMMIT_BUCKET
INIT_S3_FS_BUCKET
INIT_S3_BLOCK_BUCKET
INIT_S3_KEY_ID
INIT_S3_SECRET_KEY
Tip
We recommend that you check that the relevant configuration files are correct and copy the SEAFILE_VOLUME
directory before the service is officially started, because only the configuration files are generated after initialization. You can directly migrate the entire copied SEAFILE_VOLUME
to other nodes later:
cp -r /opt/seafile/shared /opt/seafile/shared-bak\n
Restart the container to start the service in frontend node
docker compose down\ndocker compose up -d\n
Frontend node starts successfully
After executing the above command, you can trace the logs of container seafile
(i.e., docker logs seafile
). You can see the following message if the frontend node starts successfully:
*** Running /etc/my_init.d/01_create_data_links.sh...\n*** Booting runit daemon...\n*** Runit started as PID 20\n*** Running /scripts/enterpoint.sh...\n2024-11-21 03:02:35 Nginx ready \n\n2024-11-21 03:02:35 This is an idle script (infinite loop) to keep container running. \n---------------------------------\n\nSeafile cluster frontend mode\n\n---------------------------------\n\n\nStarting seafile server, please wait ...\nLicense file /opt/seafile/seafile-license.txt does not exist, allow at most 3 trial users\nSeafile server started\n\nDone.\n\nStarting seahub at port 8000 ...\n\nSeahub is started\n\nDone.\n
You can directly copy all the directories generated by the first frontend node, including the Docker-compose files (e.g., seafile-server.yml
, .env
) and modified configuration files, and then start the seafile docker container:
docker compose down\ndocker compose up -d\n
"},{"location":"setup/cluster_deploy_with_docker/#deploy-seafile-backend-node","title":"Deploy seafile backend node","text":"Create the mount directory
$ mkdir -p /opt/seafile/shared\n
Pulling Seafile image
Copy seafile-server.yml
, .env
and configuration files from frontend node
Note
The configuration files from frontend node have to be put in the same path as the frontend node, i.e., /opt/seafile/shared/seafile/conf/*
Modify .env
, set CLUSTER_MODE
to backend
Start the service in the backend node
docker compose up -d\n
Backend node starts successfully
After executing the above command, you can trace the logs of container seafile
(i.e., docker logs seafile
). You can see the following message if the backend node starts successfully:
*** Running /etc/my_init.d/01_create_data_links.sh...\n*** Booting runit daemon...\n*** Runit started as PID 21\n*** Running /scripts/enterpoint.sh...\n2024-11-21 03:11:59 Nginx ready \n2024-11-21 03:11:59 This is an idle script (infinite loop) to keep container running. \n\n---------------------------------\n\nSeafile cluster backend mode\n\n---------------------------------\n\n\nStarting seafile server, please wait ...\nLicense file /opt/seafile/seafile-license.txt does not exist, allow at most 3 trial users\nSeafile server started\n\nDone.\n\nStarting seafile background tasks ...\nDone.\n
Execute the following commands on the two Seafile frontend servers:
$ apt install haproxy keepalived -y\n\n$ mv /etc/haproxy/haproxy.cfg /etc/haproxy/haproxy.cfg.bak\n\n$ cat > /etc/haproxy/haproxy.cfg << 'EOF'\nglobal\n log 127.0.0.1 local1 notice\n maxconn 4096\n user haproxy\n group haproxy\n\ndefaults\n log global\n mode http\n retries 3\n timeout connect 10000\n timeout client 300000\n timeout server 300000\n\nlisten seafile 0.0.0.0:80\n mode http\n option httplog\n option dontlognull\n option forwardfor\n cookie SERVERID insert indirect nocache\n server seafile01 Front-End01-IP:8001 check port 11001 cookie seafile01\n server seafile02 Front-End02-IP:8001 check port 11001 cookie seafile02\nEOF\n
Warning
Please correctly modify the IP address (Front-End01-IP
and Front-End02-IP
) of the frontend server in the above configuration file. Other wise it cannot work properly.
Choose one of the above two servers as the master node, and the other as the slave node.
Perform the following operations on the master node:
$ cat > /etc/keepalived/keepalived.conf << 'EOF'\n! Configuration File for keepalived\n\nglobal_defs {\n notification_email {\n root@localhost\n }\n notification_email_from keepalived@localhost\n smtp_server 127.0.0.1\n smtp_connect_timeout 30\n router_id node1\n vrrp_mcast_group4 224.0.100.18\n}\n\nvrrp_instance VI_1 {\n state MASTER\n interface eno1 # Set to the device name of a valid network interface on the current server, and the virtual IP will be bound to the network interface\n virtual_router_id 50\n priority 100\n advert_int 1\n authentication {\n auth_type PASS\n auth_pass seafile123\n }\n virtual_ipaddress {\n 172.26.154.45/24 dev eno1 # Configure to the correct virtual IP and network interface device name\n }\n}\nEOF\n
Warning
Please correctly configure the virtual IP address and network interface device name in the above file. Other wise it cannot work properly.
Perform the following operations on the standby node:
$ cat > /etc/keepalived/keepalived.conf << 'EOF'\n! Configuration File for keepalived\n\nglobal_defs {\n notification_email {\n root@localhost\n }\n notification_email_from keepalived@localhost\n smtp_server 127.0.0.1\n smtp_connect_timeout 30\n router_id node2\n vrrp_mcast_group4 224.0.100.18\n}\n\nvrrp_instance VI_1 {\n state BACKUP\n interface eno1 # Set to the device name of a valid network interface on the current server, and the virtual IP will be bound to the network interface\n virtual_router_id 50\n priority 98\n advert_int 1\n authentication {\n auth_type PASS\n auth_pass seafile123\n }\n virtual_ipaddress {\n 172.26.154.45/24 dev eno1 # Configure to the correct virtual IP and network interface device name\n }\n}\nEOF\n
Finally, run the following commands on the two Seafile frontend servers to start the corresponding services:
$ systemctl enable --now haproxy\n$ systemctl enable --now keepalived\n
So far, Seafile cluster has been deployed.
"},{"location":"setup/cluster_deploy_with_docker/#optional-deploy-seadoc-server","title":"(Optional) Deploy SeaDoc server","text":"You can follow here to deploy SeaDoc server. And then modify SEADOC_SERVER_URL
in your .env
file
This manual explains how to deploy and run Seafile Server on a Linux server using Kubernetes (k8s thereafter).
"},{"location":"setup/cluster_deploy_with_k8s/#gettings-started","title":"Gettings started","text":"The two volumes for persisting data, /opt/seafile-data
and /opt/seafile-mysql
, are still adopted in this manual. What's more, all k8s YAML files will be placed in /opt/seafile-k8s-yaml
. It is not recommended to change these paths. If you do, account for it when following these instructions.
The two tools, kubectl and a k8s control plane tool (e.g., kubeadm), are required and can be installed by following the official installation guide.
Multi-node deployment
If it is a multi-node deployment, the k8s control plane needs to be installed on each node. After installation, you need to start the k8s control plane service on each node and refer to the official k8s manual for creating a cluster. Since this manual still uses the same image as the docker deployment, we need to add the following registry credentials to k8s:
kubectl create secret docker-registry regcred --docker-server=seafileltd --docker-username=seafile --docker-password=zjkmid6rQibdZ=uJMuWS\n
"},{"location":"setup/cluster_deploy_with_k8s/#yaml","title":"YAML","text":"Seafile mainly involves three different services, namely database service, cache service and seafile service. Since these three services do not have a direct dependency relationship, we need to separate them from the entire docker-compose.yml (in this manual, we use Seafile 12 PRO) and divide them into three pods. For each pod, we need to define a series of YAML files for k8s to read, and we will store these YAMLs in /opt/seafile-k8s-yaml
.
Note
This series of YAML mainly includes Deployment for pod management and creation, Service for exposing services to the external network, PersistentVolume for defining the location of a volume used for persistent storage on the host, and PersistentVolumeClaim for declaring the use of persistent storage in the container. For further configuration details, you can refer to the official documentation.
"},{"location":"setup/cluster_deploy_with_k8s/#mariadb","title":"mariadb","text":""},{"location":"setup/cluster_deploy_with_k8s/#mariadb-deploymentyaml","title":"mariadb-deployment.yaml","text":"apiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: mariadb\nspec:\n selector:\n matchLabels:\n app: mariadb\n replicas: 1\n template:\n metadata:\n labels:\n app: mariadb\n spec:\n containers:\n - name: mariadb\n image: mariadb:10.11\n env:\n - name: MARIADB_ROOT_PASSWORD\n value: \"db_password\"\n - name: MARIADB_AUTO_UPGRADE\n value: \"true\"\n ports:\n - containerPort: 3306\n volumeMounts:\n - name: mariadb-data\n mountPath: /var/lib/mysql\n volumes:\n - name: mariadb-data\n persistentVolumeClaim:\n claimName: mariadb-data\n
Please replace MARIADB_ROOT_PASSWORD
with your own MariaDB password.
Tip
In the above Deployment configuration file, no restart policy for the pod is specified. The default restart policy is Always. If you need to modify it, add the following to the spec attribute:
restartPolicy: OnFailure\n\n#Note:\n# Always: always restart (including normal exit)\n# OnFailure: restart only on unexpected exit\n# Never: do not restart\n
"},{"location":"setup/cluster_deploy_with_k8s/#mariadb-serviceyaml","title":"mariadb-service.yaml","text":"apiVersion: v1\nkind: Service\nmetadata:\n name: mariadb\nspec:\n selector:\n app: mariadb\n ports:\n - protocol: TCP\n port: 3306\n targetPort: 3306\n
"},{"location":"setup/cluster_deploy_with_k8s/#mariadb-persistentvolumeyaml","title":"mariadb-persistentvolume.yaml","text":"apiVersion: v1\nkind: PersistentVolume\nmetadata:\n name: mariadb-data\nspec:\n capacity:\n storage: 1Gi\n accessModes:\n - ReadWriteOnce\n hostPath:\n path: /opt/seafile-mysql/db\n
"},{"location":"setup/cluster_deploy_with_k8s/#mariadb-persistentvolumeclaimyaml","title":"mariadb-persistentvolumeclaim.yaml","text":"apiVersion: v1\nkind: PersistentVolumeClaim\nmetadata:\n name: mariadb-data\nspec:\n accessModes:\n - ReadWriteOnce\n resources:\n requests:\n storage: 10Gi\n
"},{"location":"setup/cluster_deploy_with_k8s/#memcached","title":"memcached","text":""},{"location":"setup/cluster_deploy_with_k8s/#memcached-deploymentyaml","title":"memcached-deployment.yaml","text":"apiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: memcached\nspec:\n replicas: 1\n selector:\n matchLabels:\n app: memcached\n template:\n metadata:\n labels:\n app: memcached\n spec:\n containers:\n - name: memcached\n image: memcached:1.6.18\n args: [\"-m\", \"256\"]\n ports:\n - containerPort: 11211\n
"},{"location":"setup/cluster_deploy_with_k8s/#memcached-serviceyaml","title":"memcached-service.yaml","text":"apiVersion: v1\nkind: Service\nmetadata:\n name: memcached\nspec:\n selector:\n app: memcached\n ports:\n - protocol: TCP\n port: 11211\n targetPort: 11211\n
"},{"location":"setup/cluster_deploy_with_k8s/#seafile","title":"Seafile","text":""},{"location":"setup/cluster_deploy_with_k8s/#seafile-deploymentyaml","title":"seafile-deployment.yaml","text":"apiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: seafile\nspec:\n replicas: 1\n selector:\n matchLabels:\n app: seafile\n template:\n metadata:\n labels:\n app: seafile\n spec:\n containers:\n - name: seafile\n # image: seafileltd/seafile-mc:9.0.10\n # image: seafileltd/seafile-mc:11.0-latest\n image: seafileltd/seafile-pro-mc:12.0-latest\n env:\n - name: DB_HOST\n value: \"mariadb\"\n - name: DB_ROOT_PASSWD\n value: \"db_password\" #db's password\n - name: TIME_ZONE\n value: \"Europe/Berlin\"\n - name: INIT_SEAFILE_ADMIN_EMAIL\n value: \"admin@seafile.com\" #admin email\n - name: INIT_SEAFILE_ADMIN_PASSWORD\n value: \"admin_password\" #admin password\n - name: SEAFILE_SERVER_LETSENCRYPT\n value: \"false\"\n - name: SEAFILE_SERVER_HOSTNAME\n value: \"you_seafile_domain\" #hostname\n ports:\n - containerPort: 80\n # - containerPort: 443\n # name: seafile-secure\n volumeMounts:\n - name: seafile-data\n mountPath: /shared\n volumes:\n - name: seafile-data\n persistentVolumeClaim:\n claimName: seafile-data\n restartPolicy: Always\n # to get image from protected repository\n imagePullSecrets:\n - name: regcred\n
Please replace the above configurations, such as the database root password and the admin account, with your own values."},{"location":"setup/cluster_deploy_with_k8s/#seafile-serviceyaml","title":"seafile-service.yaml","text":"apiVersion: v1\nkind: Service\nmetadata:\n name: seafile\nspec:\n selector:\n app: seafile\n type: LoadBalancer\n ports:\n - protocol: TCP\n port: 80\n targetPort: 80\n nodePort: 30000\n
"},{"location":"setup/cluster_deploy_with_k8s/#seafile-persistentvolumeyaml","title":"seafile-persistentvolume.yaml","text":"apiVersion: v1\nkind: PersistentVolume\nmetadata:\n name: seafile-data\nspec:\n capacity:\n storage: 10Gi\n accessModes:\n - ReadWriteOnce\n hostPath:\n path: /opt/seafile-data\n
"},{"location":"setup/cluster_deploy_with_k8s/#seafile-persistentvolumeclaimyaml","title":"seafile-persistentvolumeclaim.yaml","text":"apiVersion: v1\nkind: PersistentVolumeClaim\nmetadata:\n name: seafile-data\nspec:\n accessModes:\n - ReadWriteOnce\n resources:\n requests:\n storage: 10Gi\n
"},{"location":"setup/cluster_deploy_with_k8s/#deploy-pods","title":"Deploy pods","text":"You can use the following command to deploy the pods:
kubectl apply -f /opt/seafile-k8s-yaml/\n
"},{"location":"setup/cluster_deploy_with_k8s/#container-management","title":"Container management","text":"Similar to docker installation, you can also manage containers through some kubectl commands. For example, you can use the following command to check whether the relevant resources are started successfully and whether the relevant services can be accessed normally. First, execute the following command and remember the pod name with seafile-
as the prefix (such as seafile-748b695648-d6l4g)
kubectl get pods\n
You can check the status of a pod by
kubectl logs seafile-748b695648-d6l4g\n
and enter a container by
kubectl exec -it seafile-748b695648-d6l4g -- bash\n
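Since the random pod-name suffix changes on every redeploy, you can also look the name up programmatically instead of copying it by hand. A sketch using the app=seafile label defined in seafile-deployment.yaml above:

```shell
# Capture the name of the first pod labelled app=seafile,
# then reuse it for logs and exec.
POD=$(kubectl get pods -l app=seafile -o jsonpath='{.items[0].metadata.name}')
kubectl logs "$POD"
kubectl exec -it "$POD" -- bash
```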
If you modify some configurations in /opt/seafile-data/conf
and need to restart the container, you can use the following commands:
kubectl delete deployments --all\nkubectl apply -f /opt/seafile-k8s-yaml/\n
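Deleting all deployments is a rather blunt instrument; if only the Seafile pod needs to pick up changed configuration, a gentler alternative (assuming kubectl 1.15 or later) is to restart just that deployment:

```shell
# Restart only the seafile deployment; mariadb and memcached keep running.
kubectl rollout restart deployment seafile
kubectl rollout status deployment seafile
```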
"},{"location":"setup/migrate_backends_data/","title":"Migrate data between different backends","text":"Seafile supports data migration between filesystem, s3, ceph, swift and Alibaba oss.
Data migration takes 3 steps:
We need to add new backend configurations to this file (including [block_backend]
, [commit_object_backend]
, [fs_object_backend]
options) and save it under a readable path. Let's assume that we are migrating data to S3 and create temporary seafile.conf under /opt
cat > seafile.conf << EOF\n[commit_object_backend]\nname = s3\nbucket = seacomm\nkey_id = ******\nkey = ******\n\n[fs_object_backend]\nname = s3\nbucket = seafs\nkey_id = ******\nkey = ******\n\n[block_backend]\nname = s3\nbucket = seablk\nkey_id = ******\nkey = ******\nEOF\n\nmv seafile.conf /opt\n
If you want to migrate to a local file system, the seafile.conf temporary configuration example is as follows:
cat > seafile.conf << EOF\n[commit_object_backend]\nname = fs\n# the dir configuration is the new seafile-data path\ndir = /var/data_backup\n\n[fs_object_backend]\nname = fs\n# the dir configuration is the new seafile-data path\ndir = /var/data_backup\n\n[block_backend]\nname = fs\n# the dir configuration is the new seafile-data path\ndir = /var/data_backup\n\nEOF\n\nmv seafile.conf /opt\n
Replace the configurations with your own values.
"},{"location":"setup/migrate_backends_data/#migrating-to-sse-c-encrypted-s3-storage","title":"Migrating to SSE-C Encrypted S3 Storage","text":"If you are migrating to S3 storage, and want your data to be encrypted at rest, you can configure SSE-C encryption options in the temporary seafile.conf. Note that you have to use Seafile Pro 11 or newer and make sure your S3 storage supports SSE-C.
cat > seafile.conf << EOF\n[commit_object_backend]\nname = s3\nbucket = seacomm\nkey_id = ******\nkey = ******\nuse_v4_signature = true\nuse_https = true\nsse_c_key = XiqMSf3x5ja4LRibBbV0sVntVpdHXl3P\n\n[fs_object_backend]\nname = s3\nbucket = seafs\nkey_id = ******\nkey = ******\nuse_v4_signature = true\nuse_https = true\nsse_c_key = XiqMSf3x5ja4LRibBbV0sVntVpdHXl3P\n\n[block_backend]\nname = s3\nbucket = seablk\nkey_id = ******\nkey = ******\nuse_v4_signature = true\nuse_https = true\nsse_c_key = XiqMSf3x5ja4LRibBbV0sVntVpdHXl3P\nEOF\n\nmv seafile.conf /opt\n
sse_c_key
is a string of 32 characters.
You can generate sse_c_key
with the following command:
openssl rand -base64 24\n
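As a sanity check, the command above yields exactly 32 characters (24 random bytes encode to 32 base64 characters with no padding), matching the required sse_c_key length:

```shell
# Generate a candidate key and confirm it is 32 characters long.
key=$(openssl rand -base64 24)
echo "$key"
echo "${#key}"   # 32
```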
"},{"location":"setup/migrate_backends_data/#migrating-large-number-of-objects","title":"Migrating large number of objects","text":"If you have millions of objects in the storage (especially fs objects), it may take quite a long time to migrate all objects. More than half of the time is spent on checking whether an object exists in the destination storage. Since Pro edition 7.0.8, a feature was added to speed up the checking.
Before running the migration script, please set this env variable:
export OBJECT_LIST_FILE_PATH=/path/to/object/list/file\n
3 files will be created: /path/to/object/list/file.commit
,/path/to/object/list/file.fs
, /path/to/object/list/file.blocks
.
When you run the script for the first time, the object list file will be filled with existing objects in the destination. Then, when you run the script for the second time, it will load the existing object list from the file, instead of querying the destination. And newly migrated objects will also be added to the file. During migration, the migration process checks whether an object exists by checking the pre-loaded object list, instead of asking the destination, which will greatly speed-up the migration process.
It's suggested that you don't interrupt the script during the \"fetch object list\" stage when you run it for the first time. Otherwise the object list in the file will be incomplete.
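After the first run completes, you can sanity-check that the three object list files exist and are non-empty before relying on them in later runs. A sketch, assuming OBJECT_LIST_FILE_PATH is still exported:

```shell
# Each file should exist and contain one line per known object.
for suffix in commit fs blocks; do
    wc -l "${OBJECT_LIST_FILE_PATH}.${suffix}"
done
```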
Another trick to speed-up the migration is to increase the number of worker threads and size of task queue in the migration script. You can modify the nworker
and maxsize
variables in the following code:
class ThreadPool(object):\n\n    def __init__(self, do_work, nworker=20):\n        self.do_work = do_work\n        self.nworker = nworker\n        self.task_queue = Queue.Queue(maxsize = 2000)\n
The number of workers can be set to relatively large values, since they're mostly waiting for I/O operations to finish.
"},{"location":"setup/migrate_backends_data/#decrypting-encrypted-storage-backend","title":"Decrypting encrypted storage backend","text":"If you have an encrypted storage backend (a deprecated feature that is no longer supported), you can use this script to migrate and decrypt the data from that backend to a new one. You can add the --decrypt
option, which will decrypt the data while reading it, and then write the unencrypted data to the new backend. Note that you need to add this option in all stages of the migration.
cd ~/haiwen/seafile-server-latest\n./migrate.sh /opt --decrypt\n
"},{"location":"setup/migrate_backends_data/#run-migratesh-to-initially-migrate-objects","title":"Run migrate.sh to initially migrate objects","text":"This step will migrate most of the objects from the source storage to the destination storage. You don't need to stop the Seafile service at this stage, as it may take quite a long time to finish. Since the service is not stopped, some new objects may be added to the source storage during migration. Those objects will be handled in the next step.
We assume you have installed seafile pro server under ~/haiwen
, enter ~/haiwen/seafile-server-latest
and run migrate.sh with the parent path of the temporary seafile.conf as its parameter, which here is /opt
.
cd ~/haiwen/seafile-server-latest\n./migrate.sh /opt\n
Tip
This script is completely reentrant. So you can stop and restart it, or run it many times. It will check whether an object exists in the destination before sending it.
"},{"location":"setup/migrate_backends_data/#run-final-migration","title":"Run final migration","text":"New objects added during the last migration step will be migrated in this step. To prevent new objects from being added, you have to stop the Seafile service during the final migration. This usually takes a short time. If you have a large number of objects, please follow the optimization instructions in the previous section.
You just have to stop Seafile and Seahub service, then run the migration script again.
cd ~/haiwen/seafile-server-latest\n./migrate.sh /opt\n
"},{"location":"setup/migrate_backends_data/#replace-the-original-seafileconf","title":"Replace the original seafile.conf","text":"After running the script, we need to replace the original seafile.conf with the new one:
mv /opt/seafile.conf ~/haiwen/conf\n
The new seafile.conf only contains the storage backend configuration; other options, e.g. memcached and quota settings, should be copied over from the original seafile.conf file.
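One cautious way to do the swap is to keep the original file around and diff the two, so the remaining options can be copied back deliberately. A sketch, assuming the ~/haiwen install paths used in this guide:

```shell
# Keep the original for reference, install the new backend config,
# then diff to see which options (memcached, quota, ...) to copy back.
cp ~/haiwen/conf/seafile.conf ~/haiwen/conf/seafile.conf.orig
mv /opt/seafile.conf ~/haiwen/conf/seafile.conf
diff ~/haiwen/conf/seafile.conf.orig ~/haiwen/conf/seafile.conf || true
```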
After replacing seafile.conf, you can restart seafile server and access the data on the new backend.
"},{"location":"setup/migrate_ce_to_pro_with_docker/","title":"Migrate CE to Pro with Docker","text":""},{"location":"setup/migrate_ce_to_pro_with_docker/#preparation","title":"Preparation","text":".env
and seafile-server.yml
of Seafile Pro.wget -O .env https://manual.seafile.com/12.0/docker/pro/env\nwget https://manual.seafile.com/12.0/docker/pro/seafile-server.yml\n
"},{"location":"setup/migrate_ce_to_pro_with_docker/#migrate","title":"Migrate","text":""},{"location":"setup/migrate_ce_to_pro_with_docker/#stop-the-seafile-ce","title":"Stop the Seafile CE","text":"docker compose down\n
Tip
To ensure data security, it is recommended that you back up your MySQL data
"},{"location":"setup/migrate_ce_to_pro_with_docker/#put-your-licence-file","title":"Put your licence file","text":"Copy the seafile-license.txt
to the volume directory of the Seafile CE's data. If the directory is /opt/seafile-data
, you should put it in /opt/seafile-data/seafile/
.
Modify .env
based on the old configurations from the old .env
file. The following fields require special attention; the others should be the same as the old configurations:
SEAFILE_IMAGE
The Seafile Pro docker image, whose tag must be equal to or newer than the old Seafile CE docker tag seafileltd/seafile-pro-mc:12.0-latest
SEAFILE_ELASTICSEARCH_VOLUME
The volume directory of Elasticsearch data /opt/seafile-elasticsearch/data
For other fields (e.g., SEAFILE_VOLUME
, SEAFILE_MYSQL_VOLUME
, SEAFILE_MYSQL_DB_USER
, SEAFILE_MYSQL_DB_PASSWORD
), the values must be consistent with the old configurations.
Tip
For the configurations used for initialization (e.g., INIT_SEAFILE_ADMIN_EMAIL
, INIT_SEAFILE_MYSQL_ROOT_PASSWORD
, INIT_S3_STORAGE_BACKEND_CONFIG
), you can remove it from .env
as well
Replace the old seafile-server.yml
and .env
by the new and modified files, i.e. (if your old seafile-server.yml
and .env
are in the /opt
)
mv -b seafile-server.yml /opt/seafile-server.yml\nmv -b .env /opt/.env\n
"},{"location":"setup/migrate_ce_to_pro_with_docker/#do-the-migration","title":"Do the migration","text":"The Seafile Pro container needs to be running during the migration process, which means that end users may access the Seafile service during this process. In order to avoid the data confusion caused by this, it is recommended that you take the necessary measures to temporarily prohibit users from accessing the Seafile service. For example, modify the firewall policy.
Run the following command to start the Seafile Pro container:
docker compose up -d\n
Then run the migration script by executing the following command:
docker exec -it seafile /opt/seafile/seafile-server-latest/pro/pro.py setup --migrate\n
After the migration script runs successfully, modify es_host, es_port
in /opt/seafile-data/seafile/conf/seafevents.conf
manually.
[INDEX FILES]\nes_host = elasticsearch\nes_port = 9200\nenabled = true\ninterval = 10m\n
Restart the Seafile Pro container.
docker restart seafile\n
Now you have a Seafile Professional service.
"},{"location":"setup/migrate_non_docker_to_docker/","title":"Migrate from non-docker Seafile deployment to docker","text":"The recommended steps to migrate from non-docker deployment to docker deployment are:
The following document assumes that the deployment path of your non-Docker version of Seafile is /opt/seafile. If you use other paths, before running the command, be careful to modify the command path.
Note
You can also refer to the Seafile backup and recovery documentation, deploy Seafile Docker on another machine, and then copy the old configuration information, database, and seafile-data to the new machine to complete the migration. The advantage of this is that even if an error occurs during the migration process, the existing system will not be destroyed.
"},{"location":"setup/migrate_non_docker_to_docker/#migrate","title":"Migrate","text":""},{"location":"setup/migrate_non_docker_to_docker/#stop-seafile-nginx","title":"Stop Seafile, Nginx","text":"Stop the locally deployed Seafile, Nginx and Memcached
systemctl stop nginx && systemctl disable nginx\nsystemctl stop memcached && systemctl disable memcached\n./seafile.sh stop && ./seahub.sh stop\n
"},{"location":"setup/migrate_non_docker_to_docker/#prepare-mysql-and-the-folders-for-seafile-docker","title":"Prepare MySQL and the folders for Seafile docker","text":""},{"location":"setup/migrate_non_docker_to_docker/#add-permissions-to-the-local-mysql-seafile-user","title":"Add permissions to the local MySQL Seafile user","text":"The non-Docker version uses the local MySQL. Now if the Docker version of Seafile connects to this MySQL, you need to grant the corresponding access permissions.
The following commands assume that you use seafile
as the user to access:
## Note, change the password according to the actual password you use\nGRANT ALL PRIVILEGES ON *.* TO 'seafile'@'%' IDENTIFIED BY 'your-password' WITH GRANT OPTION;\n\n## Grant seafile user can connect the database from any IP address\nGRANT ALL PRIVILEGES ON `ccnet_db`.* to 'seafile'@'%';\nGRANT ALL PRIVILEGES ON `seafile_db`.* to 'seafile'@'%';\nGRANT ALL PRIVILEGES ON `seahub_db`.* to 'seafile'@'%';\n\n## Restart MySQL\nsystemctl restart mariadb\n
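You can then check from another machine (or the future docker host) that the seafile user can reach MySQL over the network. A sketch; 192.168.1.100 is a placeholder for your host's actual LAN IP:

```shell
# Should list ccnet_db, seafile_db and seahub_db if the grants work.
mysql -h 192.168.1.100 -u seafile -p -e "SHOW DATABASES;"
```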
"},{"location":"setup/migrate_non_docker_to_docker/#create-the-required-directories-for-seafile-docker-image","title":"Create the required directories for Seafile Docker image","text":"By default, we take /opt/seafile-data
as example.
mkdir -p /opt/seafile-data/seafile\n
"},{"location":"setup/migrate_non_docker_to_docker/#prepare-config-files","title":"Prepare config files","text":"Copy the original config files to the directory to be mapped by the docker version of seafile
cp -r /opt/seafile/conf /opt/seafile-data/seafile\ncp -r /opt/seafile/seahub-data /opt/seafile-data/seafile\n
Modify the MySQL configuration in /opt/seafile-data/seafile/conf
, including ccnet.conf
, seafile.conf
, seahub_settings
, change HOST=127.0.0.1
to HOST=<local ip>
.
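The HOST edit can be scripted with sed. The sketch below demonstrates the substitution on a scratch file (192.168.1.100 is a placeholder IP); point the same sed command at the files under /opt/seafile-data/seafile/conf to apply it for real:

```shell
# Demonstrate the substitution on a scratch copy first.
CONF=$(mktemp -d)
printf 'HOST = 127.0.0.1\n' > "$CONF/seafile.conf"

# Replace the loopback address with the host's LAN IP (placeholder).
sed -i 's/127\.0\.0\.1/192.168.1.100/' "$CONF/seafile.conf"
grep '^HOST' "$CONF/seafile.conf"
```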
Modify the memcached configuration in seahub_settings.py
to use the Docker version of Memcached: change it to 'LOCATION': 'memcached:11211'
(the network name of Docker version of Memcached is memcached
).
We recommend downloading the Seafile docker compose YAML files into /opt/seafile-data
mkdir -p /opt/seafile-data\ncd /opt/seafile-data\n# e.g., pro edition\nwget -O .env https://manual.seafile.com/12.0/docker/pro/env\nwget https://manual.seafile.com/12.0/docker/pro/seafile-server.yml\nwget https://manual.seafile.com/12.0/docker/caddy.yml\n\nnano .env\n
After downloading the relevant configuration files, you should also modify the .env
file with the following steps:
Follow here to set up the database user information.
Mount the old Seafile data to the new Seafile server
SEAFILE_VOLUME=<old-Seafile-data>\n
"},{"location":"setup/migrate_non_docker_to_docker/#start-seafile-docker","title":"Start Seafile docker","text":"Start Seafile docker and check if everything is okay:
cd /opt/seafile-data\ndocker compose up -d\n
"},{"location":"setup/migrate_non_docker_to_docker/#security","title":"Security","text":"Since it is not possible from inside a docker container to connect to the host database via localhost, only via <local ip>
, you also need to bind your database server to that IP. If this IP is public, it is strongly advised to protect your database port with a firewall; otherwise your databases are reachable from the internet. An alternative is to add another local IP from the RFC 1918 private ranges, e.g. 192.168.123.45
. Afterwards you can bind to that IP.
Following iptables commands protect MariaDB/MySQL:
iptables -A INPUT -s 172.16.0.0/12 -j ACCEPT #Allow Dockernetworks\niptables -A INPUT -p tcp -m tcp --dport 3306 -j DROP #Deny Internet\nip6tables -A INPUT -p tcp -m tcp --dport 3306 -j DROP #Deny Internet\n
Keep in mind these firewall rules are not persistent across reboots!"},{"location":"setup/migrate_non_docker_to_docker/#binding-based","title":"Binding based","text":"For Debian-based Linux distros you can start a local IP by adding in /etc/network/interfaces
something like:
iface eth0 inet static\n address 192.168.123.45/32\n
eth0
might be ensXY
. Or if you know how to start a dummy interface, that's even better. On SUSE-based distros, edit /etc/sysconfig/network/ifcfg-eth0
(ethXY/ensXY/bondXY)
When using MariaDB, the server can only bind to a single IP address (e.g. 192.168.123.45) or to 0.0.0.0 (all interfaces). So if you bind your MariaDB server to that new address, other applications might need some reconfiguration.
In /etc/mysql/mariadb.conf.d/50-server.cnf
edit the following line to:
bind-address = 192.168.123.45\n
then edit seafile.conf and seahub_settings.py under /opt/seafile-data/seafile/conf/, change the HOST line to that IP, and execute the following commands: service networking reload\nip a #to check whether the ip is present\nservice mysql restart\nss -tulpen | grep 3306 #to check whether the database listens on the correct IP\ncd /opt/seafile-data/\ndocker compose down\ndocker compose up -d\n\n## restart your applications\n
"},{"location":"setup/overview/","title":"Seafile Docker overview","text":"Seafile docker based installation consists of the following components (docker images):
SSL
configurationSeafile version 11.0 or later is required to work with SeaDoc
"},{"location":"setup/run_seafile_as_non_root_user_inside_docker/","title":"Run Seafile as non root user inside docker","text":"You can run Seafile as a non-root user in docker.
First add the NON_ROOT=true
to the .env
.
NON_ROOT=true\n
Then modify /opt/seafile-data/seafile/
permissions.
chmod -R a+rwx /opt/seafile-data/seafile/\n
Then destroy the containers and run them again:
docker compose down\ndocker compose up -d\n
Now you can run Seafile as seafile
user.
Tip
When doing maintenance, other scripts in docker are also required to be run as seafile
user, e.g. su seafile -c ./seaf-gc.sh
You can use one of the following methods to start the Seafile container on system bootup.
"},{"location":"setup/seafile_docker_autostart/#modify-docker-composeservice","title":"Modify docker-compose.service","text":"Add docker-compose.service
vim /etc/systemd/system/docker-compose.service
[Unit]\nDescription=Docker Compose Application Service\nRequires=docker.service\nAfter=docker.service\n\n[Service]\nType=forking\nRemainAfterExit=yes\nWorkingDirectory=/opt/ \nExecStart=/usr/bin/docker compose up -d\nExecStop=/usr/bin/docker compose down\nTimeoutStartSec=0\n\n[Install]\nWantedBy=multi-user.target\n
Note
WorkingDirectory
is the absolute path to the seafile-server.yml
file directory.
Set the docker-compose.service
file to 644 permissions
chmod 644 /etc/systemd/system/docker-compose.service\n
Load autostart configuration
systemctl daemon-reload\nsystemctl enable docker-compose.service\n
Add configuration restart: unless-stopped
for each container in components of Seafile docker. Take seafile-server.yml
for example
services:\ndb:\n image: mariadb:10.11\n container_name: seafile-mysql-1\n restart: unless-stopped\n\nmemcached:\n image: memcached:1.6.18\n container_name: seafile-memcached\n restart: unless-stopped\n\nelasticsearch:\n image: elasticsearch:8.6.2\n container_name: seafile-elasticsearch\n restart: unless-stopped\n\nseafile:\n image: seafileltd/seafile-pro-mc:12.0-latest\n container_name: seafile\n restart: unless-stopped\n
Tip
Add restart: unless-stopped
, and the Seafile container will automatically start when Docker starts. If the Seafile container does not exist (execute docker compose down), the container will not start automatically.
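You can confirm the policy actually took effect on a running container with docker inspect (a sketch; seafile is the container name used in the YAML above):

```shell
# Prints the restart policy of the seafile container,
# e.g. "unless-stopped".
docker inspect -f '{{ .HostConfig.RestartPolicy.Name }}' seafile
```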
Seafile Community Edition requires a minimum of 2 cores and 2GB RAM.
"},{"location":"setup/setup_ce_by_docker/#getting-started","title":"Getting started","text":"The following assumptions and conventions are used in the rest of this document:
/opt/seafile
is the directory for storing Seafile docker compose files. If you decide to put Seafile in a different directory \u2014 which you can \u2014 adjust all paths accordingly./opt/seafile-mysql
and /opt/seafile-data
, respectively. It is not recommended to change these paths. If you do, account for it when following these instructions.Use the official installation guide for your OS to install Docker.
"},{"location":"setup/setup_ce_by_docker/#download-and-modify-env","title":"Download and modify.env
","text":"From Seafile Docker 12.0, we use .env
, seafile-server.yml
and caddy.yml
files for configuration
mkdir /opt/seafile\ncd /opt/seafile\n\n# Seafile CE 12.0\nwget -O .env https://manual.seafile.com/12.0/docker/ce/env\nwget https://manual.seafile.com/12.0/docker/ce/seafile-server.yml\nwget https://manual.seafile.com/12.0/docker/caddy.yml\n\nnano .env\n
The following fields merit particular attention:
Variable Description Default ValueSEAFILE_VOLUME
The volume directory of Seafile data /opt/seafile-data
SEAFILE_MYSQL_VOLUME
The volume directory of MySQL data /opt/seafile-mysql/db
SEAFILE_CADDY_VOLUME
The volume directory of Caddy data used to store certificates obtained from Let's Encrypt's /opt/seafile-caddy
INIT_SEAFILE_MYSQL_ROOT_PASSWORD
The root
password of MySQL (Only required on first deployment) SEAFILE_MYSQL_DB_USER
The user of MySQL (database
- user
can be found in conf/seafile.conf
) seafile
SEAFILE_MYSQL_DB_PASSWORD
The user seafile
password of MySQL (required) SEAFILE_MYSQL_DB_CCNET_DB_NAME
The database name of ccnet ccnet_db
SEAFILE_MYSQL_DB_SEAFILE_DB_NAME
The database name of seafile seafile_db
SEAFILE_MYSQL_DB_SEAHUB_DB_NAME
The database name of seahub seahub_db
JWT
JWT_PRIVATE_KEY, A random string with a length of no less than 32 characters is required for Seafile, which can be generated by using pwgen -s 40 1
(required) SEAFILE_SERVER_HOSTNAME
Seafile server hostname or domain (required) SEAFILE_SERVER_PROTOCOL
Seafile server protocol (http or https) http
TIME_ZONE
Time zone UTC
INIT_SEAFILE_ADMIN_EMAIL
Admin username me@example.com
(Recommend modifications) INIT_SEAFILE_ADMIN_PASSWORD
Admin password asecret
(Recommend modifications)"},{"location":"setup/setup_ce_by_docker/#start-seafile-server","title":"Start Seafile server","text":"Start Seafile server with the following command
docker compose up -d\n
Note
You must run the above command in the directory with the .env
. If .env
file is elsewhere, please run
docker compose -f /path/to/.env up -d\n
Success
After starting the services, you can see the initialization progress by tracing the logs of container seafile
(i.e., docker logs seafile -f
)
---------------------------------\nThis is your configuration\n---------------------------------\n\n server name: seafile\n server ip/domain: seafile.example.com\n\n seafile data dir: /opt/seafile/seafile-data\n fileserver port: 8082\n\n database: create new\n ccnet database: ccnet_db\n seafile database: seafile_db\n seahub database: seahub_db\n database user: seafile\n\n\nGenerating seafile configuration ...\n\ndone\nGenerating seahub configuration ...\n\n----------------------------------------\nNow creating seafevents database tables ...\n\n----------------------------------------\n----------------------------------------\nNow creating ccnet database tables ...\n\n----------------------------------------\n----------------------------------------\nNow creating seafile database tables ...\n\n----------------------------------------\n----------------------------------------\nNow creating seahub database tables ...\n\n----------------------------------------\n\ncreating seafile-server-latest symbolic link ... done\n\n-----------------------------------------------------------------\nYour seafile server configuration has been finished successfully.\n-----------------------------------------------------------------\n
And then you can see the following messages which the Seafile server starts successfully:
Starting seafile server, please wait ...\nSeafile server started\n\nDone.\n\nStarting seahub at port 8000 ...\n\n----------------------------------------\nSuccessfully created seafile admin\n----------------------------------------\n\nSeahub is started\n\nDone.\n
Finially, you can go to http://seafile.example.com
to use Seafile.
/opt/seafile-data
","text":"Placeholder spot for shared volumes. You may elect to store certain persistent information outside of a container, in our case we keep various log files and upload directory outside. This allows you to rebuild containers easily without losing important information.
/opt/seafile-data/seafile/logs/seafile.log
./var/log
inside the container. /opt/seafile-data/logs/var-log/nginx
contains the logs of Nginx in the Seafile container.Tip
From Seafile Docker 12.0, we use the Caddy to do web service proxy. If you would like to access the logs of Caddy, you can use following command:
docker logs seafile-caddy --follow\n
"},{"location":"setup/setup_ce_by_docker/#find-logs","title":"Find logs","text":"To monitor container logs (from outside of the container), please use the following commands:
# if the `.env` file is in current directory:\ndocker compose logs --follow\n# if the `.env` file is elsewhere:\ndocker compose -f /path/to/.env logs --follow\n\n# you can also specify container name:\ndocker compose logs seafile --follow\n# or, if the `.env` file is elsewhere:\ndocker compose -f /path/to/.env logs seafile --follow\n
The Seafile logs are under /shared/logs/seafile
in the docker, or /opt/seafile-data/logs/seafile
in the server that run the docker.
The system logs are under /shared/logs/var-log
, or /opt/seafile-data/logs/var-log
in the server that run the docker.
To monitor all Seafile logs simultaneously (from outside of the container), run
sudo tail -f $(find /opt/seafile-data/ -type f -name '*.log' 2>/dev/null)\n
"},{"location":"setup/setup_ce_by_docker/#more-configuration-options","title":"More configuration options","text":"The config files are under /opt/seafile-data/seafile/conf
. You can modify the configurations according to the configuration section.
Ensure the container is running, then enter this command:
docker exec -it seafile /opt/seafile/seafile-server-latest/reset-admin.sh\n
Enter the username and password according to the prompts. You now have a new admin account.
"},{"location":"setup/setup_ce_by_docker/#backup-and-recovery","title":"Backup and recovery","text":"Follow the instructions in Backup and restore for Seafile Docker
"},{"location":"setup/setup_ce_by_docker/#garbage-collection","title":"Garbage collection","text":"When files are deleted, the blocks comprising those files are not immediately removed as there may be other files that reference those blocks (due to the magic of deduplication). To remove them, Seafile requires a 'garbage collection' process to be run, which detects which blocks no longer used and purges them.
"},{"location":"setup/setup_ce_by_docker/#faq","title":"FAQ","text":"Q: If I want enter into the Docker container, which command I can use?
A: You can enter the Docker container using the command:
docker exec -it seafile /bin/bash\n
Q: I forgot the Seafile admin email address/password, how do I create a new admin account?
A: You can create a new admin account by running
docker exec -it seafile /opt/seafile/seafile-server-latest/reset-admin.sh\n
The Seafile service must be up when running the superuser command.
Q: If, for whatever reason, the installation fails, how do I start from a clean slate again?
A: Remove the directories /opt/seafile, /opt/seafile-data and /opt/seafile-mysql and start again.
Q: Something went wrong during the start of the containers. How can I find out more?
A: You can view the docker logs using this command: docker compose logs -f.
This manual explains how to deploy and run Seafile Server Professional Edition (Seafile PE) on a Linux server using Docker and Docker Compose. The deployment has been tested for Debian/Ubuntu and CentOS, but Seafile PE should also work on other Linux distributions.
"},{"location":"setup/setup_pro_by_docker/#requirements","title":"Requirements","text":"Seafile PE requires a minimum of 2 cores and 2GB RAM.
Other requirements for Seafile PE
If Elasticsearch is installed on the same server, the minimum requirements are 4 cores and 4 GB RAM. Also make sure the mmapfs counts do not cause exceptions such as out-of-memory errors; the limit can be increased with the following command (see https://www.elastic.co/guide/en/elasticsearch/reference/current/vm-max-map-count.html for further details):
sysctl -w vm.max_map_count=262144 #run as root\n
or modify /etc/sysctl.conf and reboot to set this value permanently:
nano /etc/sysctl.conf\n\n# modify vm.max_map_count\nvm.max_map_count=262144\n
About license
Seafile PE can be used without a paid license with up to three users. Licenses for more users can be purchased in the Seafile Customer Center, or contact Seafile Sales at sales@seafile.com. For further details, please refer to the license page of Seafile PE.
"},{"location":"setup/setup_pro_by_docker/#setup","title":"Setup","text":"The following assumptions and conventions are used in the rest of this document:
/opt/seafile
is the directory for storing Seafile Docker files. If you decide to put Seafile in a different directory, adjust all paths accordingly. Use the official installation guide for your OS to install Docker.
"},{"location":"setup/setup_pro_by_docker/#downloading-the-seafile-image","title":"Downloading the Seafile Image","text":"docker pull seafileltd/seafile-pro-mc:12.0-latest\n
Note
Since v12.0, Seafile PE versions are hosted on Docker Hub and do not require a username and password to download.
Note
Older Seafile PE versions (back to Seafile 7.0) are available in a private Docker repository. You can get the username and password on the download page in the Customer Center.
"},{"location":"setup/setup_pro_by_docker/#downloading-and-modifying-env","title":"Downloading and Modifying.env
","text":"From Seafile Docker 12.0, we use .env
, seafile-server.yml
and caddy.yml
files for configuration.
mkdir /opt/seafile\ncd /opt/seafile\n\n# Seafile PE 12.0\nwget -O .env https://manual.seafile.com/12.0/docker/pro/env\nwget https://manual.seafile.com/12.0/docker/pro/seafile-server.yml\nwget https://manual.seafile.com/12.0/docker/caddy.yml\n\nnano .env\n
The following fields merit particular attention:
Variable Description Default ValueSEAFILE_VOLUME
The volume directory of Seafile data /opt/seafile-data
SEAFILE_MYSQL_VOLUME
The volume directory of MySQL data /opt/seafile-mysql/db
SEAFILE_CADDY_VOLUME
The volume directory of Caddy data used to store certificates obtained from Let's Encrypt's /opt/seafile-caddy
SEAFILE_ELASTICSEARCH_VOLUME
The volume directory of Elasticsearch data /opt/seafile-elasticsearch/data
INIT_SEAFILE_MYSQL_ROOT_PASSWORD
The root
password of MySQL (Only required on first deployment) SEAFILE_MYSQL_DB_USER
The user of MySQL (database
- user
can be found in conf/seafile.conf
) seafile
SEAFILE_MYSQL_DB_PASSWORD
The user seafile
password of MySQL (required) SEAFILE_MYSQL_DB_CCNET_DB_NAME
The database name of ccnet ccnet_db
SEAFILE_MYSQL_DB_SEAFILE_DB_NAME
The database name of seafile seafile_db
SEAFILE_MYSQL_DB_SEAHUB_DB_NAME
The database name of seahub seahub_db
JWT
JWT_PRIVATE_KEY, A random string with a length of no less than 32 characters is required for Seafile, which can be generated by using pwgen -s 40 1
(required) SEAFILE_SERVER_HOSTNAME
Seafile server hostname or domain (required) SEAFILE_SERVER_PROTOCOL
Seafile server protocol (http or https) http
TIME_ZONE
Time zone UTC
INIT_SEAFILE_ADMIN_EMAIL
Synchronously set admin username during initialization me@example.com INIT_SEAFILE_ADMIN_PASSWORD
Synchronously set admin password during initialization asecret INIT_S3_STORAGE_BACKEND_CONFIG
Whether to configure S3 storage backend synchronously during initialization (i.e., the following variables with prefix INIT_S3_*
, for more details, please refer to AWS S3) false INIT_S3_COMMIT_BUCKET
S3 storage backend commit objects bucket (only valid when INIT_S3_STORAGE_BACKEND_CONFIG
sets to true
) (required when INIT_S3_STORAGE_BACKEND_CONFIG
sets to true
) INIT_S3_FS_BUCKET
S3 storage backend fs objects bucket (only valid when INIT_S3_STORAGE_BACKEND_CONFIG
sets to true
) (required when INIT_S3_STORAGE_BACKEND_CONFIG
sets to true
) INIT_S3_BLOCK_BUCKET
S3 storage backend block objects bucket (only valid when INIT_S3_STORAGE_BACKEND_CONFIG
sets to true
) (required when INIT_S3_STORAGE_BACKEND_CONFIG
sets to true
) INIT_S3_KEY_ID
S3 storage backend key ID (only valid when INIT_S3_STORAGE_BACKEND_CONFIG
sets to true
) (required when INIT_S3_STORAGE_BACKEND_CONFIG
sets to true
) INIT_S3_SECRET_KEY
S3 storage backend secret key (only valid when INIT_S3_STORAGE_BACKEND_CONFIG
sets to true
) (required when INIT_S3_STORAGE_BACKEND_CONFIG
sets to true
) INIT_S3_USE_V4_SIGNATURE
Use the v4 protocol of S3 if enabled (only valid when INIT_S3_STORAGE_BACKEND_CONFIG
sets to true
) true
INIT_S3_AWS_REGION
Region of your buckets (only valid when INIT_S3_STORAGE_BACKEND_CONFIG
and INIT_S3_USE_V4_SIGNATURE
sets to true
) us-east-1
INIT_S3_HOST
Host of your buckets (only valid when INIT_S3_STORAGE_BACKEND_CONFIG
and INIT_S3_USE_V4_SIGNATURE
sets to true
) s3.us-east-1.amazonaws.com
INIT_S3_USE_HTTPS
Use HTTPS connections to S3 if enabled (only valid when INIT_S3_STORAGE_BACKEND_CONFIG
sets to true
) true
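Putting the required variables together, a minimal .env for a first deployment might look like the following sketch (all values are placeholders to replace with your own; generate your own JWT_PRIVATE_KEY, e.g. with pwgen -s 40 1):

```env
SEAFILE_SERVER_HOSTNAME=seafile.example.com
SEAFILE_SERVER_PROTOCOL=https
TIME_ZONE=Etc/UTC
INIT_SEAFILE_MYSQL_ROOT_PASSWORD=change-this-root-password
SEAFILE_MYSQL_DB_PASSWORD=change-this-db-password
JWT_PRIVATE_KEY=replace-with-a-40-character-random-string
INIT_SEAFILE_ADMIN_EMAIL=me@example.com
INIT_SEAFILE_ADMIN_PASSWORD=asecret
```

The remaining variables in the table above keep their defaults unless you need a custom volume layout or an S3 backend.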
To conclude, set the directory permissions of the Elasticsearch volume:
mkdir -p /opt/seafile-elasticsearch/data\nchmod 777 -R /opt/seafile-elasticsearch/data\n
"},{"location":"setup/setup_pro_by_docker/#starting-the-docker-containers","title":"Starting the Docker Containers","text":"Run docker compose in detached mode:
docker compose up -d\n
Note
You must run the above command in the directory containing the .env
file. If the .env
file is elsewhere, please run
docker compose -f /path/to/.env up -d\n
Success
After starting the services, you can see the initialization progress by tracing the logs of container seafile
(i.e., docker logs seafile -f
)
---------------------------------\nThis is your configuration\n---------------------------------\n\n server name: seafile\n server ip/domain: seafile.example.com\n\n seafile data dir: /opt/seafile/seafile-data\n fileserver port: 8082\n\n database: create new\n ccnet database: ccnet_db\n seafile database: seafile_db\n seahub database: seahub_db\n database user: seafile\n\n\nGenerating seafile configuration ...\n\ndone\nGenerating seahub configuration ...\n\n----------------------------------------\nNow creating seafevents database tables ...\n\n----------------------------------------\n----------------------------------------\nNow creating ccnet database tables ...\n\n----------------------------------------\n----------------------------------------\nNow creating seafile database tables ...\n\n----------------------------------------\n----------------------------------------\nNow creating seahub database tables ...\n\n----------------------------------------\n\ncreating seafile-server-latest symbolic link ... done\n\n-----------------------------------------------------------------\nYour seafile server configuration has been finished successfully.\n-----------------------------------------------------------------\n
Then you will see the following messages, indicating that the Seafile server has started successfully:
Starting seafile server, please wait ...\nSeafile server started\n\nDone.\n\nStarting seahub at port 8000 ...\n\n----------------------------------------\nSuccessfully created seafile admin\n----------------------------------------\n\nSeahub is started\n\nDone.\n
Finally, you can go to http://seafile.example.com
to use Seafile.
A 502 Bad Gateway error means that the system has not yet completed initialization.
"},{"location":"setup/setup_pro_by_docker/#find-logs","title":"Find logs","text":"To view Seafile docker logs, please use the following command
docker compose logs -f\n
The Seafile logs are under /shared/logs/seafile
inside the container, or /opt/seafile-data/logs/seafile
on the host that runs Docker.
The system logs are under /shared/logs/var-log
inside the container, or /opt/seafile-data/logs/var-log
on the host that runs Docker.
If you have a seafile-license.txt
license file, simply put it in the volume of the Seafile container. The volume's default path in the Compose file is /opt/seafile-data
. If you have modified the path, save the license file under your custom path.
Then restart Seafile:
docker compose down\n\ndocker compose up -d\n
"},{"location":"setup/setup_pro_by_docker/#seafile-directory-structure","title":"Seafile directory structure","text":""},{"location":"setup/setup_pro_by_docker/#path-optseafile-data","title":"Path /opt/seafile-data
","text":"Placeholder spot for shared volumes. You may elect to store certain persistent information outside of a container, in our case we keep various log files and upload directory outside. This allows you to rebuild containers easily without losing important information.
/opt/seafile-data/seafile/logs/seafile.log
./var/log
inside the container. For example, you can find the nginx logs in /opt/seafile-data/logs/var-log/nginx/
.The command docker container list
should list the containers specified in the .env
.
The directory layout of the Seafile container's volume should look as follows:
$ tree /opt/seafile-data -L 2\n/opt/seafile-data\n\u251c\u2500\u2500 logs\n\u2502\u00a0\u00a0 \u2514\u2500\u2500 var-log\n\u251c\u2500\u2500 nginx\n\u2502\u00a0\u00a0 \u2514\u2500\u2500 conf\n\u2514\u2500\u2500 seafile\n \u00a0\u00a0 \u251c\u2500\u2500 ccnet\n \u00a0\u00a0 \u251c\u2500\u2500 conf\n \u00a0\u00a0 \u251c\u2500\u2500 logs\n \u00a0\u00a0 \u251c\u2500\u2500 pro-data\n \u00a0\u00a0 \u251c\u2500\u2500 seafile-data\n \u00a0\u00a0 \u2514\u2500\u2500 seahub-data\n
All Seafile config files are stored in /opt/seafile-data/seafile/conf
. The nginx config file is in /opt/seafile-data/nginx/conf
.
Any modification of a configuration file requires a restart of Seafile to take effect:
docker compose restart\n
All Seafile log files are stored in /opt/seafile-data/seafile/logs
whereas all other log files are in /opt/seafile-data/logs/var-log
.
Follow the instructions in Backup and restore for Seafile Docker
"},{"location":"setup/setup_pro_by_docker/#garbage-collection","title":"Garbage Collection","text":"When files are deleted, the blocks comprising those files are not immediately removed as there may be other files that reference those blocks (due to the magic of deduplication). To remove them, Seafile requires a 'garbage collection' process to be run, which detects which blocks no longer used and purges them.
"},{"location":"setup/setup_pro_by_docker/#faq","title":"FAQ","text":"Q: If I want enter into the Docker container, which command I can use?
A: You can enter the Docker container using the command:
docker exec -it seafile /bin/bash\n
Q: I forgot the Seafile admin email address/password, how do I create a new admin account?
A: You can create a new admin account by running
docker exec -it seafile /opt/seafile/seafile-server-latest/reset-admin.sh\n
The Seafile service must be up when running the superuser command.
Q: If, for whatever reason, the installation fails, how do I start from a clean slate again?
A: Remove the directories /opt/seafile, /opt/seafile-data, /opt/seafile-elasticsearch, and /opt/seafile-mysql and start again.
Q: Something went wrong during the start of the containers. How can I find out more?
A: You can view the docker logs using this command: docker compose logs -f.
The entire db
service needs to be removed (or commented out) in seafile-server.yml
if you would like to use an existing MySQL server; otherwise a redundant database service will be running:
services:\n\n # comment out or remove the entire `db` service\n #db:\n #image: ${SEAFILE_DB_IMAGE:-mariadb:10.11}\n #container_name: seafile-mysql\n # ... other parts in service `db`\n\n # do not change other services\n...\n
What's more, you have to modify the .env
to set the MySQL-related fields correctly:
SEAFILE_MYSQL_DB_HOST=192.168.0.2\nSEAFILE_MYSQL_DB_PORT=3306\nINIT_SEAFILE_MYSQL_ROOT_PASSWORD=ROOT_PASSWORD\nSEAFILE_MYSQL_DB_PASSWORD=PASSWORD\n
Tip
INIT_SEAFILE_MYSQL_ROOT_PASSWORD
is only needed during installation (i.e., the first deployment). After Seafile is installed, the user seafile
will be used to connect to the MySQL server (SEAFILE_MYSQL_DB_PASSWORD), and you can then remove INIT_SEAFILE_MYSQL_ROOT_PASSWORD.
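Before bringing the containers up, it can help to verify that the external MySQL server is actually reachable. The following is a small sketch (mysql_port_open is a hypothetical helper, not part of Seafile; host and port are the example values from the .env snippet above):

```python
import socket

# Hypothetical helper (not part of Seafile): verify the external MySQL
# server is reachable over TCP before starting the containers.
def mysql_port_open(host: str, port: int, timeout: float = 3.0) -> bool:
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Example values from the .env snippet above; substitute your own.
print(mysql_port_open("192.168.0.2", 3306))
```

A True result only confirms TCP connectivity; credentials and grants for the seafile user still need to be correct on the MySQL side.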
Ceph is a scalable distributed storage system. It's recommended to use Ceph's S3 Gateway (RGW) to integrate with Seafile. Seafile can also use Ceph's RADOS object storage layer as a storage backend, but using RADOS requires linking against the librados library, which may introduce library incompatibility issues during deployment. Furthermore, the S3 Gateway provides an easier-to-manage HTTP-based interface. If you want to integrate with the S3 Gateway, please refer to the \"Use S3-compatible Object Storage\" section in this documentation. The documentation below is for integrating with RADOS.
"},{"location":"setup/setup_with_ceph/#copy-ceph-conf-file-and-client-keyring","title":"Copy ceph conf file and client keyring","text":"Seafile acts as a client to Ceph/RADOS, so it needs to access ceph cluster's conf file and keyring. You have to copy these files from a ceph admin node's /etc/ceph directory to the seafile machine.
seafile-machine# sudo scp -r user@ceph-admin-node:/etc/ceph /etc\n
"},{"location":"setup/setup_with_ceph/#install-and-enable-memcached","title":"Install and enable memcached","text":"For best performance, Seafile requires install memcached or redis and enable cache for objects.
We recommend to allocate at least 128MB memory for object cache.
"},{"location":"setup/setup_with_ceph/#install-python-ceph-library","title":"Install Python Ceph Library","text":"File search and WebDAV functions rely on Python Ceph library installed in the system.
sudo apt-get install python3-rados\n
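After installing the package, you can confirm that the binding is visible to Python with a quick check (a small sketch, not part of Seafile's own tooling):

```python
import importlib.util

# True if the rados module installed by python3-rados can be imported
# by the Python interpreter that Seafile runs under.
has_rados = importlib.util.find_spec("rados") is not None
print("rados importable:", has_rados)
```

If this prints False, make sure you are running the same Python interpreter that Seafile's search and WebDAV components use.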
"},{"location":"setup/setup_with_ceph/#edit-seafile-configuration","title":"Edit seafile configuration","text":"Edit seafile.conf
, add the following lines:
[block_backend]\nname = ceph\nceph_config = /etc/ceph/ceph.conf\npool = seafile-blocks\n\n[commit_object_backend]\nname = ceph\nceph_config = /etc/ceph/ceph.conf\npool = seafile-commits\n\n[fs_object_backend]\nname = ceph\nceph_config = /etc/ceph/ceph.conf\npool = seafile-fs\n
You also need to add memory cache configurations
It's required to create separate pools for commit, fs, and block objects.
ceph-admin-node# rados mkpool seafile-blocks\nceph-admin-node# rados mkpool seafile-commits\nceph-admin-node# rados mkpool seafile-fs\n
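Note that on recent Ceph releases the rados mkpool subcommand has been removed. If that is the case on your cluster, the equivalent pools can be created with ceph osd pool create (a sketch; adjust pool settings such as PG count to your cluster):

```shell
ceph-admin-node# ceph osd pool create seafile-blocks
ceph-admin-node# ceph osd pool create seafile-commits
ceph-admin-node# ceph osd pool create seafile-fs
```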
Troubleshooting librados incompatibility issues
Since version 8.0, Seafile bundles librados from Ceph 16. On some systems, Seafile may fail to connect to your Ceph cluster. In such cases, you can usually solve it by removing the bundled librados libraries and using the ones installed in the OS.
To do this, you have to remove a few bundled libraries:
cd seafile-server-latest/seafile/lib\nrm librados.so.2 libstdc++.so.6 libnspr4.so\n
"},{"location":"setup/setup_with_ceph/#use-arbitary-ceph-user","title":"Use arbitary Ceph user","text":"The above configuration will use the default (client.admin) user to connect to Ceph. You may want to use some other Ceph user to connect. This is supported in Seafile. To specify the Ceph user, you have to add a ceph_client_id
option to seafile.conf, as the following:
[block_backend]\nname = ceph\nceph_config = /etc/ceph/ceph.conf\n# Specify Ceph user for Seafile here\nceph_client_id = seafile\npool = seafile-blocks\n\n[commit_object_backend]\nname = ceph\nceph_config = /etc/ceph/ceph.conf\n# Specify Ceph user for Seafile here\nceph_client_id = seafile\npool = seafile-commits\n\n[fs_object_backend]\nname = ceph\nceph_config = /etc/ceph/ceph.conf\n# Specify Ceph user for Seafile here\nceph_client_id = seafile\npool = seafile-fs\n\n# Memcached or Redis configs\n......\n
You can create a Ceph user for Seafile on your Ceph cluster like this:
ceph auth add client.seafile \\\n mds 'allow' \\\n mon 'allow r' \\\n osd 'allow rwx pool=seafile-blocks, allow rwx pool=seafile-commits, allow rwx pool=seafile-fs'\n
You also have to add this user's keyring path to /etc/ceph/ceph.conf:
[client.seafile]\nkeyring = <path to user's keyring file>\n
"},{"location":"setup/setup_with_multiple_storage_backends/","title":"Multiple Storage Backend","text":"There are some use cases that supporting multiple storage backends in Seafile server is needed. Such as:
The library data in the Seafile server is spread across multiple storage backends at the granularity of libraries. All the data in a library is located in the same storage backend. The mapping from a library to its storage backend is stored in a database table. Different mapping policies can be chosen based on the use case.
To use this feature, you need to:
In Seafile server, a storage backend is represented by the concept of \"storage class\". A storage class is defined by specifying the following information:
storage_id
: an internal string ID to identify the storage class. It's not visible to users. For example \"primary storage\".name
: A user visible name for the storage class.is_default
: whether this storage class is the default. This option is effective in two cases:commits
: the storage for storing the commit objects for this class. It can be any storage that Seafile supports, like file system, ceph, s3.fs
: the storage for storing the fs objects for this class. It can be any storage that Seafile supports, like file system, ceph, s3.blocks
: the storage for storing the block objects for this class. It can be any storage that Seafile supports, like file system, ceph, s3.The commit, fs, and block objects can be stored in different storages. This provides the most flexible way to define storage classes.
"},{"location":"setup/setup_with_multiple_storage_backends/#seafile-configuration","title":"Seafile Configuration","text":"As Seafile server before 6.3 version doesn't support multiple storage classes, you have to explicitly enable this new feature and define storage classes with a different syntax than how we define storage backend before.
First, you have to enable this feature in seafile.conf.
[storage]\nenable_storage_classes = true\nstorage_classes_file = /opt/seafile_storage_classes.json\n
You also need to add memory cache configurations to seafile.conf
If installing Seafile as Docker containers, place the seafile_storage_classes.json
file on your local disk in a sub-directory of the location that is mounted to the seafile
container, and set the storage_classes_file
configuration above to a path relative to the /shared/
directory mounted on the seafile
container.
For example, if the configuration of the seafile
container in your docker-compose.yml
file is similar to the following:
# docker-compose.yml\nservices:\n seafile:\n container_name: seafile\n volumes:\n - /opt/seafile-data:/shared\n
Then place the JSON file within any sub-directory of /opt/seafile-data
(such as /opt/seafile-data/conf/
) and then configure seafile.conf
like so:
[storage]\nenable_storage_classes = true\nstorage_classes_file = /shared/conf/seafile_storage_classes.json\n
You also need to add memory cache configurations to seafile.conf
The JSON file is an array of objects. Each object defines a storage class. The fields in the definition corresponds to the information we need to specify for a storage class. Below is an example:
[\n {\n \"storage_id\": \"hot_storage\",\n \"name\": \"Hot Storage\",\n \"is_default\": true,\n \"commits\": {\n \"backend\": \"s3\",\n \"bucket\": \"seafile-commits\",\n \"key\": \"ZjoJ8RPNDqP1vcdD60U4wAHwUQf2oJYqxN27oR09\",\n \"key_id\": \"AKIAIOT3GCU5VGCCL44A\"\n },\n \"fs\": {\n \"backend\": \"s3\",\n \"bucket\": \"seafile-fs\",\n \"key\": \"ZjoJ8RPNDqP1vcdD60U4wAHwUQf2oJYqxN27oR09\",\n \"key_id\": \"AKIAIOT3GCU5VGCCL44A\"\n },\n \"blocks\": {\n \"backend\": \"s3\",\n \"bucket\": \"seafile-blocks\",\n \"key\": \"ZjoJ8RPNDqP1vcdD60U4wAHwUQf2oJYqxN27oR09\",\n \"key_id\": \"AKIAIOT3GCU5VGCCL44A\"\n }\n },\n {\n \"storage_id\": \"cold_storage\",\n \"name\": \"Cold Storage\",\n \"is_default\": false,\n \"fs\": {\n \"backend\": \"fs\",\n \"dir\": \"/storage/seafile/seafile-data\"\n },\n \"commits\": {\n \"backend\": \"fs\",\n \"dir\": \"/storage/seafile/seafile-data\"\n },\n \"blocks\": {\n \"backend\": \"fs\",\n \"dir\": \"/storage/seafile/seaflle-data\"\n }\n },\n {\n \"storage_id\": \"swift_storage\",\n \"name\": \"Swift Storage\",\n \"fs\": {\n \"backend\": \"swift\",\n \"tenant\": \"adminTenant\",\n \"user_name\": \"admin\",\n \"password\": \"openstack\",\n \"container\": \"seafile-commits\",\n \"auth_host\": \"192.168.56.31:5000\",\n \"auth_ver\": \"v2.0\"\n },\n \"commits\": {\n \"backend\": \"swift\",\n \"tenant\": \"adminTenant\",\n \"user_name\": \"admin\",\n \"password\": \"openstack\",\n \"container\": \"seafile-fs\",\n \"auth_host\": \"192.168.56.31:5000\",\n \"auth_ver\": \"v2.0\"\n },\n \"blocks\": {\n \"backend\": \"swift\",\n \"tenant\": \"adminTenant\",\n \"user_name\": \"admin\",\n \"password\": \"openstack\",\n \"container\": \"seafile-blocks\",\n \"auth_host\": \"192.168.56.31:5000\",\n \"auth_ver\": \"v2.0\",\n \"region\": \"RegionTwo\"\n }\n },\n {\n \"storage_id\": \"ceph_storage\",\n \"name\": \"ceph Storage\",\n \"fs\": {\n \"backend\": \"ceph\",\n \"ceph_config\": \"/etc/ceph/ceph.conf\",\n \"pool\": \"seafile-fs\"\n },\n \"commits\": {\n 
\"backend\": \"ceph\",\n \"ceph_config\": \"/etc/ceph/ceph.conf\",\n \"pool\": \"seafile-commits\"\n },\n \"blocks\": {\n \"backend\": \"ceph\",\n \"ceph_config\": \"/etc/ceph/ceph.conf\",\n \"pool\": \"seafile-blocks\"\n }\n }\n]\n
As you may have seen, the commits
, fs
and blocks
information syntax is similar to what is used in [commit_object_backend]
, [fs_object_backend]
and [block_backend]
sections of seafile.conf. Refer to the detailed syntax in the documentation for the storage you use. For example, if you use S3 storage, refer to S3 Storage.
If you use file system as storage for fs
, commits
or blocks
, you must explicitly provide the path for the seafile-data
directory. The objects will be stored in storage/commits
, storage/fs
, storage/blocks
under this path.
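For example, with \"dir\": \"/storage/seafile/seafile-data\" as in the JSON sample above, the on-disk layout of that storage class would look roughly like:

```
/storage/seafile/seafile-data
└── storage
    ├── blocks
    ├── commits
    └── fs
```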
Currently, file system, S3 and Swift backends are supported. Ceph/RADOS is also supported since version 7.0.14.
"},{"location":"setup/setup_with_multiple_storage_backends/#library-mapping-policies","title":"Library Mapping Policies","text":"Library mapping policies decide the storage class a library uses. Currently we provide 3 policies for 3 different use cases. The storage class of a library is decided on creation and stored in a database table. The storage class of a library won't change if the mapping policy is changed later.
Before choosing your mapping policy, you need to enable the storage classes feature in seahub_settings.py:
ENABLE_STORAGE_CLASSES = True\n
"},{"location":"setup/setup_with_multiple_storage_backends/#user-chosen","title":"User Chosen","text":"This policy lets the users choose which storage class to use when creating a new library. The users can select any storage class that's been defined in the JSON file.
To use this policy, add the following options in seahub_settings.py:
STORAGE_CLASS_MAPPING_POLICY = 'USER_SELECT'\n
If you enable storage class support but don't explicitly set STORAGE_CLASS_MAPPING_POLICY
in seahub_settings.py, this policy is used by default.
Due to storage cost or management considerations, sometimes a system admin wants to make different types of users use different storage backends (or classes). You can configure a user's storage classes based on their roles.
A new option storage_ids
is added to the role configuration in seahub_settings.py
to assign storage classes to each role. If only one storage class is assigned to a role, the users with this role cannot choose a storage class for their libraries; if more than one class is assigned, the users can choose among them. If no storage class is assigned to a role, the default class specified in the JSON file will be used.
Here are the sample options in seahub_settings.py to use this policy:
ENABLE_STORAGE_CLASSES = True\nSTORAGE_CLASS_MAPPING_POLICY = 'ROLE_BASED'\n\nENABLED_ROLE_PERMISSIONS = {\n 'default': {\n 'can_add_repo': True,\n 'can_add_group': True,\n 'can_view_org': True,\n 'can_use_global_address_book': True,\n 'can_generate_share_link': True,\n 'can_generate_upload_link': True,\n 'can_invite_guest': True,\n 'can_connect_with_android_clients': True,\n 'can_connect_with_ios_clients': True,\n 'can_connect_with_desktop_clients': True,\n 'storage_ids': ['old_version_id', 'hot_storage', 'cold_storage', 'a_storage'],\n },\n 'guest': {\n 'can_add_repo': True,\n 'can_add_group': False,\n 'can_view_org': False,\n 'can_use_global_address_book': False,\n 'can_generate_share_link': False,\n 'can_generate_upload_link': False,\n 'can_invite_guest': False,\n 'can_connect_with_android_clients': False,\n 'can_connect_with_ios_clients': False,\n 'can_connect_with_desktop_clients': False,\n 'storage_ids': ['hot_storage', 'cold_storage'],\n },\n}\n
"},{"location":"setup/setup_with_multiple_storage_backends/#library-id-based-mapping","title":"Library ID Based Mapping","text":"This policy maps libraries to storage classes based on its library ID. The ID of a library is an UUID. In this way, the data in the system can be evenly distributed among the storage classes.
Note
This policy is not designed to be a complete distributed storage solution. It doesn't handle automatic migration of library data between storage classes. If you add more storage classes to the configuration, existing libraries will stay in their original storage classes, while new libraries can be distributed among the new storage classes (backends). You still have to plan the total storage capacity of your system at the beginning.
To use this policy, you first add the following options in seahub_settings.py:
STORAGE_CLASS_MAPPING_POLICY = 'REPO_ID_MAPPING'\n
Then you can add option for_new_library
to the backends which are expected to store new libraries in the JSON file:
[\n{\n\"storage_id\": \"new_backend\",\n\"name\": \"New store\",\n\"for_new_library\": true,\n\"is_default\": false,\n\"fs\": {\"backend\": \"fs\", \"dir\": \"/storage/seafile/new-data\"},\n\"commits\": {\"backend\": \"fs\", \"dir\": \"/storage/seafile/new-data\"},\n\"blocks\": {\"backend\": \"fs\", \"dir\": \"/storage/seafile/new-data\"}\n}\n]\n
"},{"location":"setup/setup_with_multiple_storage_backends/#multiple-storage-backend-data-migration","title":"Multiple Storage Backend Data Migration","text":"Run the migrate-repo.sh
script to migrate library data between different storage backends.
./migrate-repo.sh [repo_id] origin_storage_id destination_storage_id\n
repo_id is optional; if not specified, all libraries will be migrated.
Before running the migration script, you can set the OBJECT_LIST_FILE_PATH
environment variable to specify a path prefix to store the migrated object list.
For example:
export OBJECT_LIST_FILE_PATH=/opt/test\n
This will create three files in the specified path (/opt): test_4c731e5c-f589-4eaa-889f-14c00d4893cb.fs
test_4c731e5c-f589-4eaa-889f-14c00d4893cb.commits
test_4c731e5c-f589-4eaa-889f-14c00d4893cb.blocks
Setting the OBJECT_LIST_FILE_PATH
environment variable has two purposes:
Run the remove-objs.sh
script (before migration, you need to set the OBJECT_LIST_FILE_PATH environment variable) to delete all objects in a library in the specified storage backend.
./remove-objs.sh repo_id storage_id\n
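Putting these pieces together, a migration of a single library from a hypothetical hot_storage class to cold_storage might look like the following sketch (the library ID is the example one above; run both scripts from the seafile-server-latest directory):

```shell
# optional: record the migrated object lists under the /opt/test_* prefix
export OBJECT_LIST_FILE_PATH=/opt/test

# migrate one library to the destination storage backend
./migrate-repo.sh 4c731e5c-f589-4eaa-889f-14c00d4893cb hot_storage cold_storage

# after verifying the migration succeeded, remove the leftover objects
# from the origin backend
./remove-objs.sh 4c731e5c-f589-4eaa-889f-14c00d4893cb hot_storage
```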
"},{"location":"setup/setup_with_s3/","title":"Setup With S3 Storage","text":"Deployment notes
If your Seafile server is deployed from binary packages, you have to do the following steps before deploying:
install boto3
to your machine
sudo pip install boto3\n
Install and configure memcached or Redis.
For best performance, Seafile requires enabling memory caching for objects. We recommend allocating at least 128MB of memory for memcached or Redis.
The configuration options differ for different S3 storage providers. We'll describe the configurations in separate sections. You also need to add memory cache configurations.
New feature from 12.0 pro edition
If you will deploy the Seafile server in Docker, you can modify the following fields in .env
before starting the services:
INIT_S3_STORAGE_BACKEND_CONFIG=true\nINIT_S3_COMMIT_BUCKET=<your-commit-objects>\nINIT_S3_FS_BUCKET=<your-fs-objects>\nINIT_S3_BLOCK_BUCKET=<your-block-objects>\nINIT_S3_KEY_ID=<your-key-id>\nINIT_S3_SECRET_KEY=<your-secret-key>\nINIT_S3_USE_V4_SIGNATURE=true\nINIT_S3_AWS_REGION=us-east-1 # your AWS Region\nINIT_S3_HOST=s3.us-east-1.amazonaws.com # your S3 Host\nINIT_S3_USE_HTTPS=true\n
The above modifications will generate the same configuration file as described in this manual, and will take effect when the service is started for the first time.
"},{"location":"setup/setup_with_s3/#how-to-configure-s3-in-seafile","title":"How to configure S3 in Seafile","text":"Seafile configures S3 storage by adding or modifying the following section in seafile.conf
:
[xxx_object_backend]\nname = s3\nbucket = my-xxx-objects\nkey_id = your-key-id\nkey = your-secret-key\nuse_v4_signature = true\nuse_https = true\n... ; other optional configurations\n
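As a concrete sketch, the three required sections could look like the following (bucket names, key ID and secret are placeholders to replace with your own):

```ini
[commit_object_backend]
name = s3
bucket = my-commit-objects
key_id = your-key-id
key = your-secret-key
use_v4_signature = true
aws_region = us-east-1
use_https = true

[fs_object_backend]
name = s3
bucket = my-fs-objects
key_id = your-key-id
key = your-secret-key
use_v4_signature = true
aws_region = us-east-1
use_https = true

[block_backend]
name = s3
bucket = my-block-objects
key_id = your-key-id
key = your-secret-key
use_v4_signature = true
aws_region = us-east-1
use_https = true
```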
You have to create at least 3 buckets for Seafile, corresponding to the sections: commit_object_backend
, fs_object_backend
and block_backend
. For the configurations for each backend section, please refer to the following table:
bucket
Bucket name for commit, fs, and block objects. Make sure it follows S3 naming rules (you can refer the notes below the table). key_id
The key_id
is required to authenticate you to S3. You can find the key_id
in the \"security credentials\" section on your AWS account page or from your storage provider. key
The key
is required to authenticate you to S3. You can find the key
in the \"security credentials\" section on your AWS account page or from your storage provider. use_v4_signature
There are two versions of authentication protocols that can be used with S3 storage: Version 2 (older, may still be supported by some regions) and Version 4 (current, used by most regions). If you don't set this option, Seafile will use the v2 protocol. It's suggested to use the v4 protocol. use_https
Use https to connect to S3. It's recommended to use https. aws_region
(Optional) If you use the v4 protocol and AWS S3, set this option to the region you chose when you create the buckets. If it's not set and you're using the v4 protocol, Seafile will use us-east-1
as the default. This option will be ignored if you use the v2 protocol. host
(Optional) The endpoint by which you access the storage service. Usually it starts with the region name. You must provide the host address if you use a storage provider other than AWS; otherwise Seafile will use AWS's address (i.e., s3.us-east-1.amazonaws.com
). sse_c_key
(Optional) A string of 32 characters can be generated by openssl rand -base64 24
. It can be any 32-character random string. If you enable SSE-C, the v4 authentication protocol and HTTPS are required. path_style_request
(Optional) This option asks Seafile to use URLs like https://192.168.1.123:8080/bucketname/object
to access objects. In Amazon S3, the default URL format is in virtual host style, such as https://bucketname.s3.amazonaws.com/object
. However, this style relies on advanced DNS setup, so most self-hosted storage systems only implement the path style format. We therefore recommend setting this option to true for self-hosted storage. Bucket naming conventions
Whether you use AWS or any other S3-compatible object storage, we recommend that you follow the S3 naming rules. Before creating buckets on S3, please read the S3 naming rules first. In particular, do not use capital letters in bucket names (avoid camel-style naming such as MyCommitObjects).
Good naming of a bucket: my-commit-objects. Bad naming of a bucket: MyCommitObjects. Since Pro 11.0, you can use SSE-C with S3. Add the following sse_c_key
to seafile.conf (as shown in the above variables table):
[commit_object_backend]\nname = s3\n......\nuse_v4_signature = true\nuse_https = true\nsse_c_key = XiqMSf3x5ja4LRibBbV0sVntVpdHXl3P\n\n[fs_object_backend]\nname = s3\n......\nuse_v4_signature = true\nuse_https = true\nsse_c_key = XiqMSf3x5ja4LRibBbV0sVntVpdHXl3P\n\n[block_backend]\nname = s3\n......\nuse_v4_signature = true\nuse_https = true\nsse_c_key = XiqMSf3x5ja4LRibBbV0sVntVpdHXl3P\n
sse_c_key
is a string of 32 characters.
You can generate sse_c_key with the following command. Note that the key doesn't have to be base64 encoded. It can be any 32-character random string. The example just shows one possible way to generate such a key.
openssl rand -base64 24\n
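You can quickly confirm that a generated key has the expected length before putting it into seafile.conf:

```shell
# Generate a candidate key and check its length; base64 of 24 random
# bytes is always exactly 32 characters, with no padding.
key=$(openssl rand -base64 24)
echo "${#key}"   # prints 32
```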
Warning
If you have existing data in your S3 storage bucket, turning on the above configuration will make your data inaccessible. That's because Seafile server doesn't support encrypted and non-encrypted objects mixed in the same bucket. You have to create a new bucket, and migrate your data to it by following storage backend migration documentation.
"},{"location":"setup/setup_with_s3/#example","title":"Example","text":"AWSExoscaleHetznerOther Public Hosted S3 StorageSelf-hosted S3 Storage[commit_object_backend]\nname = s3\nbucket = my-commit-objects\nkey_id = your-key-id\nkey = your-secret-key\nuse_v4_signature = true\naws_region = eu-central-1\nuse_https = true\n\n[fs_object_backend]\nname = s3\nbucket = my-fs-objects\nkey_id = your-key-id\nkey = your-secret-key\nuse_v4_signature = true\naws_region = eu-central-1\nuse_https = true\n\n[block_backend]\nname = s3\nbucket = my-block-objects\nkey_id = your-key-id\nkey = your-secret-key\nuse_v4_signature = true\naws_region = eu-central-1\nuse_https = true\n
[commit_object_backend]\nname = s3\nbucket = your-bucket-name\nhost = sos-de-fra-1.exo.io\nkey_id = ...\nkey = ...\nuse_https = true\npath_style_request = true\n\n[fs_object_backend]\nname = s3\nbucket = your-bucket-name\nhost = sos-de-fra-1.exo.io\nkey_id = ...\nkey = ...\nuse_https = true\npath_style_request = true\n\n[block_backend]\nname = s3\nbucket = your-bucket-name\nhost = sos-de-fra-1.exo.io\nkey_id = ...\nkey = ...\nuse_https = true\npath_style_request = true\n
[commit_object_backend]\nname = s3\nbucket = your-bucket-name\nhost = fsn1.your-objectstorage.com\nkey_id = ...\nkey = ...\nuse_https = true\npath_style_request = true\n\n[fs_object_backend]\nname = s3\nbucket = your-bucket-name\nhost = fsn1.your-objectstorage.com\nkey_id = ...\nkey = ...\nuse_https = true\npath_style_request = true\n\n[block_backend]\nname = s3\nbucket = your-bucket-name\nhost = fsn1.your-objectstorage.com\nkey_id = ...\nkey = ...\nuse_https = true\npath_style_request = true\n
There are other S3-compatible cloud storage providers in the market, such as Backblaze and Wasabi. Configuration for those providers is just a bit different from AWS. We can't guarantee that the following configuration works for all providers. If you have problems, please contact our support.
[commit_object_backend]\nname = s3\nbucket = my-commit-objects\nhost = <access endpoint for storage provider>\nkey_id = your-key-id\nkey = your-secret-key\n# v2 authentication protocol will be used if not set\nuse_v4_signature = true\n# required for v4 protocol. ignored for v2 protocol.\naws_region = <region name for storage provider>\nuse_https = true\n\n[fs_object_backend]\nname = s3\nbucket = my-fs-objects\nhost = <access endpoint for storage provider>\nkey_id = your-key-id\nkey = your-secret-key\nuse_v4_signature = true\naws_region = <region name for storage provider>\nuse_https = true\n\n[block_backend]\nname = s3\nbucket = my-block-objects\nhost = <access endpoint for storage provider>\nkey_id = your-key-id\nkey = your-secret-key\nuse_v4_signature = true\naws_region = <region name for storage provider>\nuse_https = true\n
Many self-hosted object storage systems are now compatible with the S3 API, such as OpenStack Swift, Ceph's RADOS Gateway and Minio. You can use these S3-compatible storage systems as backend for Seafile. Here is an example config:
[commit_object_backend]\nname = s3\nbucket = my-commit-objects\nkey_id = your-key-id\nkey = your-secret-key\nhost = <your s3 api endpoint host>:<your s3 api endpoint port>\npath_style_request = true\nuse_v4_signature = true\nuse_https = true\n\n[fs_object_backend]\nname = s3\nbucket = my-fs-objects\nkey_id = your-key-id\nkey = your-secret-key\nhost = <your s3 api endpoint host>:<your s3 api endpoint port>\npath_style_request = true\nuse_v4_signature = true\nuse_https = true\n\n[block_backend]\nname = s3\nbucket = my-block-objects\nkey_id = your-key-id\nkey = your-secret-key\nhost = <your s3 api endpoint host>:<your s3 api endpoint port>\npath_style_request = true\nuse_v4_signature = true\nuse_https = true\n
"},{"location":"setup/setup_with_s3/#run-and-test","title":"Run and Test","text":"Now you can start Seafile and test
"},{"location":"setup/setup_with_swift/","title":"Setup With OpenStack Swift","text":"This backend uses the native Swift API. Previously, users could only use the S3-compatibility layer of Swift; that approach is now obsolete.
Since version 6.3, OpenStack Swift v3.0 API is supported.
"},{"location":"setup/setup_with_swift/#prepare","title":"Prepare","text":"To setup Seafile Professional Server with Swift:
Edit seafile.conf
, add the following lines:
[block_backend]\nname = swift\ntenant = yourTenant\nuser_name = user\npassword = secret\ncontainer = seafile-blocks\nauth_host = 192.168.56.31:5000\nauth_ver = v3.0\nregion = yourRegion\n\n[commit_object_backend]\nname = swift\ntenant = yourTenant\nuser_name = user\npassword = secret\ncontainer = seafile-commits\nauth_host = 192.168.56.31:5000\nauth_ver = v3.0\nregion = yourRegion\n\n[fs_object_backend]\nname = swift\ntenant = yourTenant\nuser_name = user\npassword = secret\ncontainer = seafile-fs\nauth_host = 192.168.56.31:5000\nauth_ver = v3.0\nregion = yourRegion\n
You also need to add memory cache configurations
The above config is just an example. You should replace the options according to your own environment.
Seafile supports Swift with Keystone as the authentication mechanism. The auth_host
option is the address and port of the Keystone service. The region
option is used to select the publicURL; if you don't configure it, the first publicURL in the returned authentication information is used.
Seafile also supports Tempauth and Swauth since professional edition 6.2.1. The auth_ver
option should be set to v1.0
, tenant
and region
are no longer needed.
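For Tempauth or Swauth, a backend section might look like the following sketch (the auth endpoint and credentials are placeholders; apply the same change to the commit, fs, and block sections):

```ini
[commit_object_backend]
name = swift
user_name = user
password = secret
container = seafile-commits
auth_host = 192.168.56.31:8080
auth_ver = v1.0
```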
It's required to create separate containers for commit, fs, and block objects.
"},{"location":"setup/setup_with_swift/#use-https-connections-to-swift","title":"Use HTTPS connections to Swift","text":"Since Pro 5.0.4, you can use HTTPS connections to Swift. Add the following options to seafile.conf:
[commit_object_backend]\nname = swift\n......\nuse_https = true\n\n[fs_object_backend]\nname = swift\n......\nuse_https = true\n\n[block_backend]\nname = swift\n......\nuse_https = true\n
Because the server package is built on CentOS 6, if you're using Debian/Ubuntu, you have to copy the system CA bundle to CentOS's CA bundle path. Otherwise Seafile can't find the CA bundle and the SSL connection will fail.
sudo mkdir -p /etc/pki/tls/certs\nsudo cp /etc/ssl/certs/ca-certificates.crt /etc/pki/tls/certs/ca-bundle.crt\nsudo ln -s /etc/pki/tls/certs/ca-bundle.crt /etc/pki/tls/cert.pem\n
"},{"location":"setup/setup_with_swift/#run-and-test","title":"Run and Test","text":"Now you can start Seafile by ./seafile.sh start
and ./seahub.sh start
and visit the website.
Since Seafile 12.0, all reverse proxy, HTTPS, and similar processing for single-node deployments based on Docker is handled by Caddy. If you need to use another reverse proxy service, you can refer to this document to modify the relevant configuration files.
"},{"location":"setup/use_other_reverse_proxy/#services-that-require-reverse-proxy","title":"Services that require reverse proxy","text":"Before making changes to the configuration files, you have to know the services used by Seafile and related components (Table 1 hereafter).
Tip
The services shown in the table below are all based on the single-node integrated deployment in accordance with the Seafile official documentation.
If these services are deployed in standalone mode (such as seadoc and notification-server), or deployed by following the official documentation of third-party plugins (such as onlyoffice and collabora), you can skip modifying the configuration files of those services (because Caddy is not used as a reverse proxy in such deployment approaches).
If you have not yet integrated some of these services, install them in standalone mode or by following the official documentation of the third-party plugins when you need them
YML | Service | Suggested exposed port | Service listen port | Requires WebSocket
seafile-server.yml | seafile | 80 | 80 | No
seadoc.yml | seadoc | 8888 | 80 | Yes
notification-server.yml | notification-server | 8083 | 8083 | Yes
collabora.yml | collabora | 6232 | 9980 | No
onlyoffice.yml | onlyoffice | 6233 | 80 | No"},{"location":"setup/use_other_reverse_proxy/#modify-yml-files","title":"Modify YML files","text":"Refer to Table 1 for the related services' exposed ports. Add a ports
section for the corresponding services
services:\n <the service need to be modified>:\n ...\n ports:\n - \"<Suggest exposed port>:<Service listen port>\"\n
Delete all fields related to Caddy reverse proxy (in label
section)
Tip
Some .yml
files (e.g., onlyoffice.yml
) also have Caddy port-exposing information at the top of the file, which also needs to be removed.
We take seafile-server.yml
for example (Pro edition):
services:\n # ... other services\n\n seafile:\n image: ${SEAFILE_IMAGE:-seafileltd/seafile-pro-mc:12.0-latest}\n container_name: seafile\n ports:\n - \"80:80\"\n volumes:\n - ${SEAFILE_VOLUME:-/opt/seafile-data}:/shared\n environment:\n - DB_HOST=${SEAFILE_MYSQL_DB_HOST:-db}\n - DB_PORT=${SEAFILE_MYSQL_DB_PORT:-3306}\n - DB_ROOT_PASSWD=${INIT_SEAFILE_MYSQL_ROOT_PASSWORD:-}\n - DB_PASSWORD=${SEAFILE_MYSQL_DB_PASSWORD:?Variable is not set or empty}\n - SEAFILE_MYSQL_DB_CCNET_DB_NAME=${SEAFILE_MYSQL_DB_CCNET_DB_NAME:-ccnet_db}\n - SEAFILE_MYSQL_DB_SEAFILE_DB_NAME=${SEAFILE_MYSQL_DB_SEAFILE_DB_NAME:-seafile_db}\n - SEAFILE_MYSQL_DB_SEAHUB_DB_NAME=${SEAFILE_MYSQL_DB_SEAHUB_DB_NAME:-seahub_db}\n - TIME_ZONE=${TIME_ZONE:-Etc/UTC}\n - INIT_SEAFILE_ADMIN_EMAIL=${INIT_SEAFILE_ADMIN_EMAIL:-me@example.com}\n - INIT_SEAFILE_ADMIN_PASSWORD=${INIT_SEAFILE_ADMIN_PASSWORD:-asecret}\n - SEAFILE_SERVER_HOSTNAME=${SEAFILE_SERVER_HOSTNAME:?Variable is not set or empty}\n - SEAFILE_SERVER_PROTOCOL=${SEAFILE_SERVER_PROTOCOL:-http}\n - SITE_ROOT=${SITE_ROOT:-/}\n - NON_ROOT=${NON_ROOT:-false}\n - JWT_PRIVATE_KEY=${JWT_PRIVATE_KEY:?Variable is not set or empty}\n - ENABLE_SEADOC=${ENABLE_SEADOC:-false}\n - SEADOC_SERVER_URL=${SEADOC_SERVER_URL:-http://example.example.com/sdoc-server}\n - INIT_S3_STORAGE_BACKEND_CONFIG=${INIT_S3_STORAGE_BACKEND_CONFIG:-false}\n - INIT_S3_COMMIT_BUCKET=${INIT_S3_COMMIT_BUCKET:-}\n - INIT_S3_FS_BUCKET=${INIT_S3_FS_BUCKET:-}\n - INIT_S3_BLOCK_BUCKET=${INIT_S3_BLOCK_BUCKET:-}\n - INIT_S3_KEY_ID=${INIT_S3_KEY_ID:-}\n - INIT_S3_SECRET_KEY=${INIT_S3_SECRET_KEY:-}\n - INIT_S3_USE_V4_SIGNATURE=${INIT_S3_USE_V4_SIGNATURE:-true}\n - INIT_S3_AWS_REGION=${INIT_S3_AWS_REGION:-us-east-1}\n - INIT_S3_HOST=${INIT_S3_HOST:-us-east-1}\n - INIT_S3_USE_HTTPS=${INIT_S3_USE_HTTPS:-true}\n # please remove the label section\n depends_on:\n - db\n - memcached\n - elasticsearch\n networks:\n - seafile-net\n\n# ... other options\n
"},{"location":"setup/use_other_reverse_proxy/#add-reverse-proxy-for-related-services","title":"Add reverse proxy for related services","text":"Modify nginx.conf
and add reverse proxy for services seafile and seadoc:
Note
If your proxy server's host is not the same as the host Seafile is deployed on, please replace 127.0.0.1
with your Seafile server's host
location / {\n proxy_pass http://127.0.0.1:80;\n proxy_read_timeout 310s;\n proxy_set_header Host $host;\n proxy_set_header Forwarded \"for=$remote_addr;proto=$scheme\";\n proxy_set_header X-Real-IP $remote_addr;\n proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;\n proxy_set_header X-Forwarded-Proto $scheme;\n proxy_set_header Connection \"\";\n proxy_http_version 1.1;\n\n client_max_body_size 0;\n}\n
location /sdoc-server/ {\n proxy_pass http://127.0.0.1:8888/;\n proxy_redirect off;\n proxy_set_header Host $host;\n proxy_set_header X-Real-IP $remote_addr;\n proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;\n proxy_set_header X-Forwarded-Host $server_name;\n proxy_set_header X-Forwarded-Proto $scheme;\n\n client_max_body_size 100m;\n}\n\nlocation /socket.io {\n proxy_pass http://127.0.0.1:8888/;\n proxy_http_version 1.1;\n proxy_set_header Upgrade $http_upgrade;\n proxy_set_header Connection 'upgrade';\n proxy_redirect off;\n\n proxy_buffers 8 32k;\n proxy_buffer_size 64k;\n\n proxy_set_header X-Real-IP $remote_addr;\n proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;\n proxy_set_header Host $http_host;\n proxy_set_header X-NginX-Proxy true;\n}\n
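If you also use the notification server, it needs a similar proxy block with WebSocket support (per Table 1, it listens on port 8083). The location path below is a sketch and assumes the default notification endpoint; adjust it to your deployment:

```nginx
location /notification {
    proxy_pass http://127.0.0.1:8083;
    # WebSocket upgrade headers are required for this service
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header Host $http_host;
}
```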
"},{"location":"setup/use_other_reverse_proxy/#modify-env","title":"Modify .env","text":"Remove caddy.yml
from field COMPOSE_FILE
in .env
, e.g.
COMPOSE_FILE='seafile-server.yml' # remove caddy.yml\n
"},{"location":"setup/use_other_reverse_proxy/#restart-services-and-nginx","title":"Restart services and nginx","text":"docker compose down\ndocker compose up -d\nsudo systemctl restart nginx\n
"},{"location":"setup_binary/deploy_in_a_cluster/","title":"Deploy in a cluster","text":"Tip
Since Seafile Pro server 6.0.0, cluster deployment requires \"sticky session\" settings in the load balancer. Otherwise, folder downloads from the web UI may not work properly. Read the \"Load Balancer Setting\" section below for details
"},{"location":"setup_binary/deploy_in_a_cluster/#architecture","title":"Architecture","text":"The Seafile cluster solution employs a 3-tier architecture:
This architecture scales horizontally. That means, you can handle more traffic by adding more machines. The architecture is visualized in the following picture.
There are two main components on the Seafile server node: web server (Nginx/Apache) and Seafile app server. The web server passes requests from the clients to Seafile app server. The Seafile app servers work independently. They don't know about each other's state. That means each app server can fail independently without affecting other app server instances. The load balancer is responsible for detecting failure and re-routing requests.
Even though Seafile app servers work independently, they still have to share some session information. All shared session information is stored in memory cache. Thus, all Seafile app servers have to connect to the same memory cache server (cluster). Since Pro Edition 11.0, both memcached and Redis can be used as memory cache. Before 11.0, only memcached is supported. More details about memory cache configuration is available later.
The background server is the workhorse for various background tasks, including full-text indexing, office file preview, virus scanning, and LDAP syncing. It should usually run on a dedicated server for better performance. Currently only one background task server can run in the entire cluster. If more than one background server is running, they may conflict with each other when performing some tasks. If you need HA for the background task server, you can consider using Keepalived to build a hot backup for it. More details can be found in background server setup.
All Seafile app servers access the same set of user data. The user data has two parts: One in the MySQL database and the other one in the backend storage cluster (S3, Ceph etc.). All app servers serve the data equally to the clients.
All app servers have to connect to the same database or database cluster. We recommend to use MariaDB Galera Cluster if you need a database cluster.
There are a few steps to deploy a Seafile cluster:
At least 3 Linux servers with at least 4 cores and 8GB RAM each. Two servers work as frontend servers, while one works as the background task server. Virtual machines are sufficient for most cases.
In a small cluster, you can re-use the 3 Seafile servers to run the memcached cluster and MariaDB cluster. For larger clusters, you can add 3 more dedicated servers to run them. Because the load on these two clusters is not high, they can share hardware to save cost. Documentation about how to set up the memcached cluster and MariaDB cluster can be found here.
Since version 11.0, Redis can also be used as memory cache server. But currently only single-node Redis is supported.
"},{"location":"setup_binary/deploy_in_a_cluster/#install-python-libraries","title":"Install Python libraries","text":"On each mode, you need to install some python libraries.
First make sure your have installed Python 2.7, then:
sudo easy_install pip\nsudo pip install boto\n
If you receive an error stating \"Wheel installs require setuptools >= ...\", run this between the pip and boto lines above
sudo pip install setuptools --no-use-wheel --upgrade\n
"},{"location":"setup_binary/deploy_in_a_cluster/#configure-a-single-node","title":"Configure a Single Node","text":"You should make sure the config files on every Seafile server are consistent.
"},{"location":"setup_binary/deploy_in_a_cluster/#get-the-license","title":"Get the license","text":"Put the license you get under the top level diretory. In our wiki, we use the diretory /data/haiwen/
as the top level directory.
tar xf seafile-pro-server_8.0.0_x86-64.tar.gz\n
Now you have:
haiwen\n\u251c\u2500\u2500 seafile-license.txt\n\u2514\u2500\u2500 seafile-pro-server-8.0.0/\n
"},{"location":"setup_binary/deploy_in_a_cluster/#setup-seafile","title":"Setup Seafile","text":"Please follow Download and Setup Seafile Professional Server With MySQL to setup a single Seafile server node.
Use the load balancer's address or domain name for the server address. Don't use the local IP address of each Seafile server machine. This assures the user will always access your service via the load balancers
After the setup process is done, you still have to do a few manual changes to the config files.
"},{"location":"setup_binary/deploy_in_a_cluster/#seafileconf","title":"seafile.conf","text":"If you use a single memcached server, you have to add the following configuration to seafile.conf
[cluster]\nenabled = true\n\n[memcached]\nmemcached_options = --SERVER=192.168.1.134 --POOL-MIN=10 --POOL-MAX=100\n
If you use memcached cluster, the recommended way to setup memcached clusters can be found here.
You'll setup two memcached server, in active/standby mode. A floating IP address will be assigned to the current active memcached node. So you have to configure the address in seafile.conf accordingly.
[cluster]\nenabled = true\n\n[memcached]\nmemcached_options = --SERVER=<floating IP address> --POOL-MIN=10 --POOL-MAX=100\n
If you are using Redis as cache, add following configurations:
[cluster]\nenabled = true\n\n[redis]\n# your redis server address\nredis_server = 127.0.0.1\n# your redis server port\nredis_port = 6379\n# size of connection pool to redis, default is 100\nmax_connections = 100\n
Currently only single-node Redis is supported. Redis Sentinel or Cluster is not supported yet.
(Optional) The Seafile server also opens a port for the load balancers to run health checks. Seafile by default uses port 11001. You can change this by adding the following config option to seafile.conf
[cluster]\nhealth_check_port = 12345\n
"},{"location":"setup_binary/deploy_in_a_cluster/#seahub_settingspy","title":"seahub_settings.py","text":"You must setup and use memory cache when deploying Seafile cluster. Refer to \"memory cache\" to configure memory cache in Seahub.
Also add the following options to seahub_settings.py. These settings tell Seahub to store avatars in the database and cache them in memcached, and to store the CSS cache in local memory.
AVATAR_FILE_STORAGE = 'seahub.base.database_storage.DatabaseStorage'\n
"},{"location":"setup_binary/deploy_in_a_cluster/#seafeventsconf","title":"seafevents.conf","text":"Here is an example [INDEX FILES]
section:
[INDEX FILES]\nenabled = true\ninterval = 10m\nhighlight = fvh # This configuration is only available for Seafile 6.3.0 pro and above.\nindex_office_pdf = true\nes_host = background.seafile.com\nes_port = 9200\n
Tip
enable = true
should be left unchanged. It means the file search feature is enabled.
In cluster environment, we have to store avatars in the database instead of in a local disk.
CREATE TABLE `avatar_uploaded` (`filename` TEXT NOT NULL, `filename_md5` CHAR(32) NOT NULL PRIMARY KEY, `data` MEDIUMTEXT NOT NULL, `size` INTEGER NOT NULL, `mtime` datetime NOT NULL);\n
"},{"location":"setup_binary/deploy_in_a_cluster/#backend-storage-settings","title":"Backend Storage Settings","text":"You also need to add the settings for backend cloud storage systems to the config files.
You need to set up Nginx/Apache with HTTP on each machine running Seafile server. This makes sure only port 80 needs to be exposed to the load balancer. (HTTPS should be set up at the load balancer.)
Please check the following documents on how to set up HTTP with Nginx/Apache. (HTTPS is not needed)
Once you have finished configuring this single node, start it to test if it runs properly:
cd /data/haiwen/seafile-server-latest\n./seafile.sh start\n./seahub.sh start\n
Success
The first time you start seahub, the script would prompt you to create an admin account for your Seafile server.
Open your browser, visit http://ip-address-of-this-node:80
and login with the admin account.
Now you have one node working fine, let's continue to configure more nodes.
"},{"location":"setup_binary/deploy_in_a_cluster/#copy-the-config-to-all-seafile-servers","title":"Copy the config to all Seafile servers","text":"Supposed your Seafile installation directory is /data/haiwen
, compress this whole directory into a tarball and copy the tarball to all other Seafile server machines. You can simply uncompress the tarball and use it.
On each node, run ./seafile.sh
and ./seahub.sh
to start Seafile server.
On the backend node, you need to execute the following commands to start Seafile server. CLUSTER_MODE=backend means this node is the Seafile backend server.
export CLUSTER_MODE=backend\n./seafile.sh start\n./seafile-background-tasks.sh start\n
"},{"location":"setup_binary/deploy_in_a_cluster/#start-seafile-service-on-boot","title":"Start Seafile Service on boot","text":"It would be convenient to setup Seafile service to start on system boot. Follow this documentation to set it up on all nodes.
"},{"location":"setup_binary/deploy_in_a_cluster/#firewall-settings","title":"Firewall Settings","text":"There are 2 firewall rule changes for Seafile cluster:
Now that your cluster is already running, fire up the load balancer and welcome your users. Since version 6.0.0, Seafile Pro requires \"sticky session\" settings in the load balancer. You should refer to the manual of your load balancer for how to set up sticky sessions.
"},{"location":"setup_binary/deploy_in_a_cluster/#aws-elastic-load-balancer-elb","title":"AWS Elastic Load Balancer (ELB)","text":"In the AWS ELB management console, after you've added the Seafile server instances to the instance list, you should do two more configurations.
First you should setup HTTP(S) listeners. Ports 443 and 80 of ELB should be forwarded to the ports 80 or 443 of the Seafile servers.
Then you setup health check
Refer to AWS documentation about how to setup sticky sessions.
"},{"location":"setup_binary/deploy_in_a_cluster/#haproxy","title":"HAProxy","text":"This is a sample /etc/haproxy/haproxy.cfg
:
(Assume your health check port is 11001
)
global\n log 127.0.0.1 local1 notice\n maxconn 4096\n user haproxy\n group haproxy\n\ndefaults\n log global\n mode http\n retries 3\n maxconn 2000\n timeout connect 10000\n timeout client 300000\n timeout server 300000\n\nlisten seafile 0.0.0.0:80\n mode http\n option httplog\n option dontlognull\n option forwardfor\n cookie SERVERID insert indirect nocache\n server seafileserver01 192.168.1.165:80 check port 11001 cookie seafileserver01\n server seafileserver02 192.168.1.200:80 check port 11001 cookie seafileserver02\n
"},{"location":"setup_binary/deploy_in_a_cluster/#see-how-it-runs","title":"See how it runs","text":"Now you should be able to test your cluster. Open https://seafile.example.com in your browser and enjoy. You can also synchronize files with Seafile clients.
If the above works, the next step would be Enable search and background tasks in a cluster.
"},{"location":"setup_binary/deploy_in_a_cluster/#the-final-configuration-of-the-front-end-nodes","title":"The final configuration of the front-end nodes","text":"Here is the summary of configurations at the front-end node that related to cluster setup. (for version 7.1+)
For seafile.conf:
[cluster]\nenabled = true\nmemcached_options = --SERVER=<IP of memcached node> --POOL-MIN=10 --POOL-MAX=100\n
The enabled
option will prevent the start of background tasks by ./seafile.sh start
in the front-end node. The tasks should be explicitly started by ./seafile-background-tasks.sh start
at the back-end node.
For seahub_settings.py:
AVATAR_FILE_STORAGE = 'seahub.base.database_storage.DatabaseStorage'\n
For seafevents.conf:
[INDEX FILES]\nenabled = true\ninterval = 10m\nhighlight = fvh # This configuration is for improving searching speed\nes_host = <IP of background node>\nes_port = 9200\n
The [INDEX FILES]
section is needed to let the front-end node know the file search feature is enabled.
In the seafile cluster, only one server should run the background tasks, including:
Let's assume you have three nodes in your cluster: A, B, and C.
If you followed the steps for setting up a cluster, nodes B and C should already be configured as frontend nodes. You can copy the configuration of node B as a base for node A, then do the following steps:
Since 9.0, the ElasticSearch program is not part of the Seafile package. You should deploy the ElasticSearch service separately. Then edit seafevents.conf
, add the following lines:
[INDEX FILES]\nenabled = true\nes_host = <ip of elastic search service>\nes_port = 9200\ninterval = 10m\nhighlight = fvh # this is for improving the search speed\n
Edit seafile.conf to enable virus scan according to virus scan document
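As a sketch, a ClamAV-based virus scan section in seafile.conf looks like this (the command, return codes, and interval below are examples; refer to the virus scan document for the full set of options):

```ini
[virus_scan]
scan_command = clamscan
virus_code = 1
nonvirus_code = 0
scan_interval = 60
```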
"},{"location":"setup_binary/enable_search_and_background_tasks_in_a_cluster/#configure-other-nodes","title":"Configure Other Nodes","text":"On nodes B and C, you need to:
Edit seafevents.conf
, add the following lines:
[INDEX FILES]\nenabled = true\nes_host = <ip of elastic search service>\nes_port = 9200\n
"},{"location":"setup_binary/enable_search_and_background_tasks_in_a_cluster/#start-the-background-node","title":"Start the background node","text":"Type the following commands to start the background node (Note, one additional command seafile-background-tasks.sh
is needed)
export CLUSTER_MODE=backend\n./seafile.sh start\n./seafile-background-tasks.sh start\n
To stop the background node, type:
./seafile-background-tasks.sh stop\n./seafile.sh stop\n
You should also configure Seafile background tasks to start on system bootup. For systemd based OS, you can add /etc/systemd/system/seafile-background-tasks.service
:
[Unit]\nDescription=Seafile Background Tasks Server\nAfter=network.target seafile.service\n\n[Service]\nType=forking\nExecStart=/opt/seafile/seafile-server-latest/seafile-background-tasks.sh start\nExecStop=/opt/seafile/seafile-server-latest/seafile-background-tasks.sh stop\nUser=root\nGroup=root\n\n[Install]\nWantedBy=multi-user.target\n
Then enable this task in systemd:
systemctl enable seafile-background-tasks.service\n
"},{"location":"setup_binary/enable_search_and_background_tasks_in_a_cluster/#the-final-configuration-of-the-background-node","title":"The final configuration of the background node","text":"Here is the summary of configurations at the background node that related to clustering setup.
For seafile.conf:
[cluster]\nenabled = true\n\n[memcached]\nmemcached_options = --SERVER=<you memcached server host> --POOL-MIN=10 --POOL-MAX=100\n
For seafevents.conf:
[INDEX FILES]\nenabled = true\nes_host = <ip of elastic search service>\nes_port = 9200\ninterval = 10m\nhighlight = fvh # this is for improving the search speed\n
"},{"location":"setup_binary/fail2ban/","title":"seafile-authentication-fail2ban","text":""},{"location":"setup_binary/fail2ban/#what-is-fail2ban","title":"What is fail2ban ?","text":"Fail2ban is an intrusion prevention software framework which protects computer servers from brute-force attacks. Written in the Python programming language, it is able to run on POSIX systems that have an interface to a packet-control system or firewall installed locally, for example, iptables or TCP Wrapper.
(Definition from wikipedia - https://en.wikipedia.org/wiki/Fail2ban)
"},{"location":"setup_binary/fail2ban/#why-do-i-need-to-install-this-fail2bans-filter","title":"Why do I need to install this fail2ban's filter ?","text":"To protect your seafile website against brute force attemps. Each time a user/computer tries to connect and fails 3 times, a new line will be write in your seafile logs (seahub.log
).
Fail2ban will check this log file and will ban all failed authentications with a new rule in your firewall.
"},{"location":"setup_binary/fail2ban/#installation","title":"Installation","text":""},{"location":"setup_binary/fail2ban/#change-to-right-time-zone-in-seahub_settingspy","title":"Change to right Time Zone in seahub_settings.py","text":"Without this your Fail2Ban filter will not work
You need to add the following settings to seahub_settings.py but change it to your own time zone.
# TimeZone\n TIME_ZONE = 'Europe/Stockholm'\n
"},{"location":"setup_binary/fail2ban/#copy-and-edit-jaillocal-file","title":"Copy and edit jail.local file","text":"this file may override some parameters from your jail.conf
file
Edit jail.local with:

* the ports used by your Seafile website (e.g. http,https)
* logpath (e.g. /home/yourusername/logs/seahub.log)
* maxretry (the default of 3 corresponds to 9 real login attempts in Seafile, because one line is written for every 3 failed authentications)
jail.local
in /etc/fail2ban
with the following content:","text":"# All standard jails are in the file configuration located\n# /etc/fail2ban/jail.conf\n\n# Warning you may override any other parameter (e.g. banaction,\n# action, port, logpath, etc) in that section within jail.local\n\n# Change logpath with your file log used by seafile (e.g. seahub.log)\n# Also you can change the max retry var (3 attemps = 1 line written in the\n# seafile log)\n# So with this maxrety to 1, the user can try 3 times before his IP is banned\n\n[seafile]\n\nenabled = true\nport = http,https\nfilter = seafile-auth\nlogpath = /home/yourusername/logs/seahub.log\nmaxretry = 3\n
"},{"location":"setup_binary/fail2ban/#create-the-fail2ban-filter-file-seafile-authconf-in-etcfail2banfilterd-with-the-following-content","title":"Create the fail2ban filter file seafile-auth.conf
in /etc/fail2ban/filter.d
with the following content:","text":"# Fail2Ban filter for seafile\n#\n\n[INCLUDES]\n\n# Read common prefixes. If any customizations available -- read them from\n# common.local\nbefore = common.conf\n\n[Definition]\n\n_daemon = seaf-server\n\nfailregex = Login attempt limit reached.*, ip: <HOST>\n\nignoreregex = \n\n# DEV Notes:\n#\n# pattern : 2015-10-20 15:20:32,402 [WARNING] seahub.auth.views:155 login Login attempt limit reached, username: <user>, ip: 1.2.3.4, attemps: 3\n# 2015-10-20 17:04:32,235 [WARNING] seahub.auth.views:163 login Login attempt limit reached, ip: 1.2.3.4, attempts: 3\n
"},{"location":"setup_binary/fail2ban/#restart-fail2ban","title":"Restart fail2ban","text":"Finally, just restart fail2ban and check your firewall (iptables for me) :
sudo fail2ban-client reload\nsudo iptables -S\n
Fail2ban will create a new chain for this jail, so you should see these new lines:
...\n-N fail2ban-seafile\n...\n-A fail2ban-seafile -j RETURN\n
"},{"location":"setup_binary/fail2ban/#tests","title":"Tests","text":"To do a simple test (but you have to be an administrator on your seafile server) go to your seafile webserver URL and try 3 authentications with a wrong password.
Actually, when you have done that, you are banned from http and https ports in iptables, thanks to fail2ban.
To check this:
On fail2ban:
denis@myserver:~$ sudo fail2ban-client status seafile\nStatus for the jail: seafile\n|- filter\n| |- File list: /home/<youruser>/logs/seahub.log\n| |- Currently failed: 0\n| `- Total failed: 1\n`- action\n |- Currently banned: 1\n | `- IP list: 1.2.3.4\n `- Total banned: 1\n
On iptables:
sudo iptables -S\n\n...\n-A fail2ban-seafile -s 1.2.3.4/32 -j REJECT --reject-with icmp-port-unreachable\n...\n
To unban your IP address, just execute this command:
sudo fail2ban-client set seafile unbanip 1.2.3.4\n
Tip
As three (3) failed login attempts result in one line added to seahub.log, a Fail2Ban jail with the setting maxretry = 3 corresponds to nine (9) failed login attempts.
"},{"location":"setup_binary/https_with_apache/","title":"Enabling HTTPS with Apache","text":"After completing the installation of Seafile Server Community Edition and Seafile Server Professional Edition, communication between the Seafile server and clients runs over (unencrypted) HTTP. While HTTP is ok for testing purposes, switching to HTTPS is imperative for production use.
HTTPS requires an SSL certificate from a Certificate Authority (CA). Unless you already have an SSL certificate, we recommend that you get your SSL certificate from Let\u2019s Encrypt using Certbot. If you have an SSL certificate from another CA, skip the section \"Getting a Let's Encrypt certificate\".
A second requirement is a reverse proxy supporting SSL. Apache, a popular web server and reverse proxy, is a good option. The full documentation of Apache is available at https://httpd.apache.org/docs/.
The recommended reverse proxy is Nginx. You find instructions for enabling HTTPS with Nginx here.
"},{"location":"setup_binary/https_with_apache/#setup","title":"Setup","text":"The setup of Seafile using Apache as a reverse proxy with HTTPS is demonstrated using the sample host name seafile.example.com
.
This manual assumes the following requirements:
If your setup differs from these requirements, adjust the following instructions accordingly.
The setup proceeds in two steps: First, Apache is installed. Second, an SSL certificate is integrated into the Apache configuration.
"},{"location":"setup_binary/https_with_apache/#installing-apache","title":"Installing Apache","text":"Install and enable apache modules:
# Ubuntu\n$ sudo a2enmod rewrite\n$ sudo a2enmod proxy_http\n
Important: Due to the security advisory published by the Django team, we recommend disabling GZip compression to mitigate the BREACH attack. No version earlier than Apache 2.4 should be used.
"},{"location":"setup_binary/https_with_apache/#configuring-apache","title":"Configuring Apache","text":"Modify Apache config file. For CentOS, this is vhost.conf.
For Debian/Ubuntu, this is sites-enabled/000-default
.
<VirtualHost *:80>\n ServerName seafile.example.com\n # Use \"DocumentRoot /var/www/html\" for CentOS\n # Use \"DocumentRoot /var/www\" for Debian/Ubuntu\n DocumentRoot /var/www\n Alias /media /opt/seafile/seafile-server-latest/seahub/media\n\n AllowEncodedSlashes On\n\n RewriteEngine On\n\n <Location /media>\n Require all granted\n </Location>\n\n #\n # seafile fileserver\n #\n ProxyPass /seafhttp http://127.0.0.1:8082\n ProxyPassReverse /seafhttp http://127.0.0.1:8082\n RewriteRule ^/seafhttp - [QSA,L]\n\n #\n # seahub\n #\n SetEnvIf Authorization \"(.*)\" HTTP_AUTHORIZATION=$1\n ProxyPreserveHost On\n ProxyPass / http://127.0.0.1:8000/\n ProxyPassReverse / http://127.0.0.1:8000/\n</VirtualHost>\n
"},{"location":"setup_binary/https_with_apache/#getting-a-lets-encrypt-certificate","title":"Getting a Let's Encrypt certificate","text":"Getting a Let's Encrypt certificate is straightforward thanks to Certbot. Certbot is a free, open source software tool for requesting, receiving, and renewing Let's Encrypt certificates.
First, go to the Certbot website and choose your web server and OS.
Second, follow the detailed instructions then shown.
We recommend that you get just a certificate and that you modify the Apache configuration yourself:
sudo certbot --apache certonly\n
Follow the instructions on the screen.
Upon successful verification, Certbot saves the certificate files in a directory named after the host name in /etc/letsencrypt/live
. For the host name seafile.example.com, the files are stored in /etc/letsencrypt/live/seafile.example.com
.
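Let's Encrypt certificates expire after 90 days. Certbot packages usually set up a systemd timer or cron job for automatic renewal; you can verify that renewal will succeed with a dry run, which does not touch the live certificates:

```shell
# Simulate the renewal process against the Let's Encrypt staging servers
sudo certbot renew --dry-run

# List the certificates Certbot manages, with their paths and expiry dates
sudo certbot certificates
```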
To use HTTPS, you need to enable mod_ssl:
$ sudo a2enmod ssl\n
Then modify your Apache configuration file. Here is a sample:
<VirtualHost *:443>\n ServerName seafile.example.com\n DocumentRoot /var/www\n\n SSLEngine On\n # Path to your fullchain.pem and privkey.pem (Apache does not allow\n # trailing comments or semicolons on directive lines)\n SSLCertificateFile /etc/letsencrypt/live/seafile.example.com/fullchain.pem\n SSLCertificateKeyFile /etc/letsencrypt/live/seafile.example.com/privkey.pem\n\n Alias /media /opt/seafile/seafile-server-latest/seahub/media\n\n <Location /media>\n Require all granted\n </Location>\n\n RewriteEngine On\n\n #\n # seafile fileserver\n #\n ProxyPass /seafhttp http://127.0.0.1:8082\n ProxyPassReverse /seafhttp http://127.0.0.1:8082\n RewriteRule ^/seafhttp - [QSA,L]\n\n #\n # seahub\n #\n SetEnvIf Authorization \"(.*)\" HTTP_AUTHORIZATION=$1\n ProxyPreserveHost On\n ProxyPass / http://127.0.0.1:8000/\n ProxyPassReverse / http://127.0.0.1:8000/\n</VirtualHost>\n
Finally, make sure the virtual host file does not contain syntax errors and restart Apache for the configuration changes to take effect:
sudo service apache2 restart\n
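If the restart fails, a configuration syntax check usually pinpoints the problem:

```shell
# Check the Apache configuration for syntax errors; prints "Syntax OK" on success
sudo apachectl configtest
# On Debian/Ubuntu the equivalent wrapper is:
sudo apache2ctl -t
```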
"},{"location":"setup_binary/https_with_apache/#modifying-seahub_settingspy","title":"Modifying seahub_settings.py","text":"The SERVICE_URL
in seahub_settings.py informs Seafile about the chosen domain, protocol and port. Change the SERVICE_URL
so as to account for the switch from HTTP to HTTPS and to correspond to your host name (the https:// prefix must not be removed):
SERVICE_URL = 'https://seafile.example.com'\n
The FILE_SERVER_ROOT
in seahub_settings.py informs Seafile about the location of and the protocol used by the file server. Change the FILE_SERVER_ROOT
so as to account for the switch from HTTP to HTTPS and to correspond to your host name (the trailing /seafhttp
must not be removed):
FILE_SERVER_ROOT = 'https://seafile.example.com/seafhttp'\n
Note: The SERVICE_URL
and FILE_SERVER_ROOT
can also be modified in Seahub via System Administration > Settings. If they are configured both via System Admin and in seahub_settings.py, the value in System Admin takes precedence.
To improve security, the file server should only be accessible via Apache.
Add the following line to the [fileserver] block of seafile.conf
in /opt/seafile/conf
:
host = 127.0.0.1 ## default 0.0.0.0\n
After this change, the file server only accepts requests from Apache.
"},{"location":"setup_binary/https_with_apache/#starting-seafile-and-seahub","title":"Starting Seafile and Seahub","text":"Restart the seaf-server and Seahub for the config changes to take effect:
$ su seafile\n$ cd /opt/seafile/seafile-server-latest\n$ ./seafile.sh restart\n$ ./seahub.sh restart\n
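As a quick smoke test (assuming the sample host name seafile.example.com), both Seahub and the file server should now answer through Apache:

```shell
# Seahub behind the reverse proxy: expect a 200 or a redirect status line
curl -sI https://seafile.example.com | head -n 1

# The /seafhttp proxy rule: any HTTP status line here shows the rule is active
curl -sI https://seafile.example.com/seafhttp/ | head -n 1
```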
"},{"location":"setup_binary/https_with_apache/#troubleshooting","title":"Troubleshooting","text":"If there are problems with paths or files containing spaces, make sure to have at least Apache 2.4.12.
References
After completing the installation of Seafile Server Community Edition and Seafile Server Professional Edition, communication between the Seafile server and clients runs over (unencrypted) HTTP. While HTTP is ok for testing purposes, switching to HTTPS is imperative for production use.
HTTPS requires an SSL certificate from a Certificate Authority (CA). Unless you already have an SSL certificate, we recommend that you get your SSL certificate from Let\u2019s Encrypt using Certbot. If you have an SSL certificate from another CA, skip the section \"Getting a Let's Encrypt certificate\".
A second requirement is a reverse proxy supporting SSL. Nginx, a popular and resource-friendly web server and reverse proxy, is a good option. Nginx's documentation is available at http://nginx.org/en/docs/.
If you prefer Apache, you find instructions for enabling HTTPS with Apache here.
"},{"location":"setup_binary/https_with_nginx/#setup","title":"Setup","text":"The setup of Seafile using Nginx as a reverse proxy with HTTPS is demonstrated using the sample host name seafile.example.com
.
This manual assumes the following requirements:
If your setup differs from these requirements, adjust the following instructions accordingly.
The setup proceeds in two steps: First, Nginx is installed. Second, an SSL certificate is integrated into the Nginx configuration.
"},{"location":"setup_binary/https_with_nginx/#installing-nginx","title":"Installing Nginx","text":"Install Nginx using the package repositories:
CentOSDebian$ sudo yum install nginx -y\n
$ sudo apt install nginx -y\n
After the installation, start the server and enable it so that Nginx starts at system boot:
$ sudo systemctl start nginx\n$ sudo systemctl enable nginx\n
"},{"location":"setup_binary/https_with_nginx/#preparing-nginx","title":"Preparing Nginx","text":"The configuration of a proxy server in Nginx differs slightly between CentOS and Debian/Ubuntu. Additionally, the restrictive default settings of SELinux's configuration on CentOS require a modification.
"},{"location":"setup_binary/https_with_nginx/#preparing-nginx-on-centos","title":"Preparing Nginx on CentOS","text":"Switch SELinux into permissive mode and perpetuate the setting:
$ sudo setenforce permissive\n$ sed -i 's/^SELINUX=.*/SELINUX=permissive/' /etc/selinux/config\n
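You can confirm that both changes took effect:

```shell
# The running mode should now be "Permissive"
getenforce

# The persisted setting should survive a reboot
grep '^SELINUX=' /etc/selinux/config
```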
Create a configuration file for seafile in /etc/nginx/conf.d
:
$ touch /etc/nginx/conf.d/seafile.conf\n
"},{"location":"setup_binary/https_with_nginx/#preparing-nginx-on-debianubuntu","title":"Preparing Nginx on Debian/Ubuntu","text":"Create a configuration file for seafile in /etc/nginx/sites-available/
:
$ touch /etc/nginx/sites-available/seafile.conf\n
Delete the default files in /etc/nginx/sites-enabled/
and /etc/nginx/sites-available
:
$ rm /etc/nginx/sites-enabled/default\n$ rm /etc/nginx/sites-available/default\n
Create a symbolic link:
$ ln -s /etc/nginx/sites-available/seafile.conf /etc/nginx/sites-enabled/seafile.conf\n
"},{"location":"setup_binary/https_with_nginx/#configuring-nginx","title":"Configuring Nginx","text":"Copy the following sample Nginx config file into the just created seafile.conf
and modify the content to fit your needs:
log_format seafileformat '$http_x_forwarded_for $remote_addr [$time_local] \"$request\" $status $body_bytes_sent \"$http_referer\" \"$http_user_agent\" $upstream_response_time';\n\nserver {\n listen 80;\n server_name seafile.example.com;\n\n proxy_set_header X-Forwarded-For $remote_addr;\n\n location / {\n proxy_pass http://127.0.0.1:8000;\n proxy_set_header Host $http_host;\n proxy_set_header X-Real-IP $remote_addr;\n proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;\n proxy_set_header X-Forwarded-Host $server_name;\n proxy_read_timeout 1200s;\n\n # used for view/edit office file via Office Online Server\n client_max_body_size 0;\n\n access_log /var/log/nginx/seahub.access.log seafileformat;\n error_log /var/log/nginx/seahub.error.log;\n }\n\n location /seafhttp {\n rewrite ^/seafhttp(.*)$ $1 break;\n proxy_pass http://127.0.0.1:8082;\n client_max_body_size 0;\n proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;\n\n proxy_connect_timeout 36000s;\n proxy_read_timeout 36000s;\n proxy_send_timeout 36000s;\n\n send_timeout 36000s;\n\n access_log /var/log/nginx/seafhttp.access.log seafileformat;\n error_log /var/log/nginx/seafhttp.error.log;\n }\n location /media {\n root /opt/seafile/seafile-server-latest/seahub;\n }\n}\n
The following options must be modified in the CONF file:
Optional customizable options in the seafile.conf are:

* Server port (listen) - if the Seafile server should be available on a non-standard port
* Seahub location (/) - if Seahub is configured to start on a different port than 8000
* File server location (/seafhttp) - if seaf-server is configured to start on a different port than 8082
* Maximum upload size (client_max_body_size)
is 1M. Uploading larger files will result in an error message HTTP error code 413 (\"Request Entity Too Large\"). It is recommended to syncronize the value of client_max_body_size with the parameter max_upload_size
in section [fileserver]
of seafile.conf. Optionally, the value can also be set to 0 to disable this feature. Client uploads are only partly effected by this limit. With a limit of 100 MiB they can safely upload files of any size.
Finally, make sure your seafile.conf does not contain syntax errors and restart Nginx for the configuration changes to take effect:
$ nginx -t\n$ nginx -s reload\n
"},{"location":"setup_binary/https_with_nginx/#getting-a-lets-encrypt-certificate","title":"Getting a Let's Encrypt certificate","text":"Getting a Let's Encrypt certificate is straightforward thanks to Certbot. Certbot is a free, open source software tool for requesting, receiving, and renewing Let's Encrypt certificates.
First, go to the Certbot website and choose your webserver and OS.
Second, follow the detailed instructions then shown.
We recommend that you get just a certificate and that you modify the Nginx configuration yourself:
$ sudo certbot certonly --nginx\n
Follow the instructions on the screen.
Upon successful verification, Certbot saves the certificate files in a directory named after the host name in /etc/letsencrypt/live
. For the host name seafile.example.com, the files are stored in /etc/letsencrypt/live/seafile.example.com
.
Add a server block for port 443 and an HTTP-to-HTTPS redirect to the seafile.conf
configuration file in /etc/nginx
.
This is a (shortened) sample configuration for the host name seafile.example.com:
log_format seafileformat '$http_x_forwarded_for $remote_addr [$time_local] \"$request\" $status $body_bytes_sent \"$http_referer\" \"$http_user_agent\" $upstream_response_time';\n\nserver {\n listen 80;\n server_name seafile.example.com;\n rewrite ^ https://$http_host$request_uri? permanent; # Forced redirect from HTTP to HTTPS\n\n server_tokens off; # Prevents the Nginx version from being displayed in the HTTP response header\n}\n\nserver {\n listen 443 ssl;\n ssl_certificate /etc/letsencrypt/live/seafile.example.com/fullchain.pem; # Path to your fullchain.pem\n ssl_certificate_key /etc/letsencrypt/live/seafile.example.com/privkey.pem; # Path to your privkey.pem\n server_name seafile.example.com;\n server_tokens off;\n\n location / {\n proxy_pass http://127.0.0.1:8000;\n proxy_set_header Host $http_host;\n proxy_set_header X-Real-IP $remote_addr;\n proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;\n proxy_set_header X-Forwarded-Host $server_name;\n proxy_read_timeout 1200s;\n\n proxy_set_header X-Forwarded-Proto https;\n\n... # No changes beyond this point compared to the Nginx configuration without HTTPS\n
Finally, make sure your seafile.conf does not contain syntax errors and restart Nginx for the configuration changes to take effect:
nginx -t\nnginx -s reload\n
"},{"location":"setup_binary/https_with_nginx/#large-file-uploads","title":"Large file uploads","text":"Tip for uploading very large files (> 4GB): By default Nginx will buffer large request body in temp file. After the body is completely received, Nginx will send the body to the upstream server (seaf-server in our case). But it seems when file size is very large, the buffering mechanism dosen't work well. It may stop proxying the body in the middle. So if you want to support file upload larger for 4GB, we suggest you install Nginx version >= 1.8.0 and add the following options to Nginx config file:
location /seafhttp {\n ... ...\n proxy_request_buffering off;\n }\n
If you have WebDAV enabled it is recommended to add the same:
location /seafdav {\n ... ...\n proxy_request_buffering off;\n }\n
"},{"location":"setup_binary/https_with_nginx/#modifying-seahub_settingspy","title":"Modifying seahub_settings.py","text":"The SERVICE_URL
in seahub_settings.py informs Seafile about the chosen domain, protocol and port. Change the SERVICE_URL
so as to account for the switch from HTTP to HTTPS and to correspond to your host name (the https:// prefix must not be removed):
SERVICE_URL = 'https://seafile.example.com'\n
The FILE_SERVER_ROOT
in seahub_settings.py informs Seafile about the location of and the protocol used by the file server. Change the FILE_SERVER_ROOT
so as to account for the switch from HTTP to HTTPS and to correspond to your host name (the trailing /seafhttp
must not be removed):
FILE_SERVER_ROOT = 'https://seafile.example.com/seafhttp'\n
Note: The SERVICE_URL
and FILE_SERVER_ROOT
can also be modified in Seahub via System Administration > Settings. If they are configured both via System Admin and in seahub_settings.py, the value in System Admin takes precedence.
To improve security, the file server should only be accessible via Nginx.
Add the following line to the [fileserver] block of seafile.conf
in /opt/seafile/conf
:
host = 127.0.0.1 ## default 0.0.0.0\n
After this change, the file server only accepts requests from Nginx.
"},{"location":"setup_binary/https_with_nginx/#starting-seafile-and-seahub","title":"Starting Seafile and Seahub","text":"Restart the seaf-server and Seahub for the config changes to take effect:
$ su seafile\n$ cd /opt/seafile/seafile-server-latest\n$ ./seafile.sh restart\n$ ./seahub.sh restart # or \"./seahub.sh start-fastcgi\" if you're using fastcgi\n
"},{"location":"setup_binary/https_with_nginx/#additional-modern-settings-for-nginx-optional","title":"Additional modern settings for Nginx (optional)","text":""},{"location":"setup_binary/https_with_nginx/#activating-ipv6","title":"Activating IPv6","text":"Require IPv6 on server otherwise the server will not start! Also the AAAA dns record is required for IPv6 usage.
listen 443 ssl;\nlisten [::]:443 ssl;\n
"},{"location":"setup_binary/https_with_nginx/#activating-http2","title":"Activating HTTP2","text":"Activate HTTP2 for more performance. Only available for SSL and nginx version>=1.9.5. Simply add http2
.
listen 443 ssl http2;\nlisten [::]:443 ssl http2;\n
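Whether HTTP/2 is actually negotiated can be checked with curl (version 7.50 or later supports the %{http_version} write-out variable); seafile.example.com stands in for your host name:

```shell
# Print only the HTTP version negotiated with the server; "2" means HTTP/2
curl -sI -o /dev/null -w '%{http_version}\n' https://seafile.example.com
```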
"},{"location":"setup_binary/https_with_nginx/#advanced-tls-configuration-for-nginx-optional","title":"Advanced TLS configuration for Nginx (optional)","text":"The TLS configuration in the sample Nginx configuration file above receives a B overall rating on SSL Labs. By modifying the TLS configuration in seafile.conf
, this rating can be significantly improved.
The following sample Nginx configuration file for the host name seafile.example.com contains additional security-related directives. (Note that this sample file uses a generic path for the SSL certificate files.) Some of the directives require further steps as explained below.
server {\n listen 80;\n server_name seafile.example.com;\n rewrite ^ https://$http_host$request_uri? permanent; # Forced redirect from HTTP to HTTPS\n server_tokens off;\n }\n server {\n listen 443 ssl;\n ssl_certificate /etc/ssl/cacert.pem; # Path to your cacert.pem\n ssl_certificate_key /etc/ssl/privkey.pem; # Path to your privkey.pem\n server_name seafile.example.com;\n server_tokens off;\n\n # HSTS for protection against man-in-the-middle-attacks\n add_header Strict-Transport-Security \"max-age=31536000; includeSubDomains\";\n\n # DH parameters for Diffie-Hellman key exchange\n ssl_dhparam /etc/nginx/dhparam.pem;\n\n # Supported protocols and ciphers for general purpose server with good security and compatability with most clients\n ssl_protocols TLSv1.2 TLSv1.3;\n ssl_ciphers ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384;\n ssl_prefer_server_ciphers off;\n\n # Supported protocols and ciphers for server when clients > 5years (i.e., Windows Explorer) must be supported\n #ssl_protocols TLSv1 TLSv1.1 TLSv1.2 TLSv1.3;\n #ssl_ciphers ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384:DHE-RSA-CHACHA20-POLY1305:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES128-SHA:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES256-SHA:ECDHE-RSA-AES256-SHA:DHE-RSA-AES128-SHA256:DHE-RSA-AES256-SHA256:AES128-GCM-SHA256:AES256-GCM-SHA384:AES128-SHA256:AES256-SHA256:AES128-SHA:AES256-SHA:DES-CBC3-SHA;\n #ssl_prefer_server_ciphers on;\n\n ssl_session_timeout 5m;\n ssl_session_cache shared:SSL:5m;\n\n location / {\n proxy_pass http://127.0.0.1:8000;\n proxy_set_header Host $http_host;\n 
proxy_set_header X-Real-IP $remote_addr;\n proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;\n proxy_set_header X-Forwarded-Host $server_name;\n proxy_set_header X-Forwarded-Proto https;\n\n access_log /var/log/nginx/seahub.access.log;\n error_log /var/log/nginx/seahub.error.log;\n\n proxy_read_timeout 1200s;\n\n client_max_body_size 0;\n }\n\n location /seafhttp {\n rewrite ^/seafhttp(.*)$ $1 break;\n proxy_pass http://127.0.0.1:8082;\n client_max_body_size 0;\n proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;\n proxy_connect_timeout 36000s;\n proxy_read_timeout 36000s;\n proxy_send_timeout 36000s;\n send_timeout 36000s;\n }\n\n location /media {\n root /home/user/haiwen/seafile-server-latest/seahub;\n }\n }\n
"},{"location":"setup_binary/https_with_nginx/#enabling-http-strict-transport-security","title":"Enabling HTTP Strict Transport Security","text":"Enable HTTP Strict Transport Security (HSTS) to prevent man-in-the-middle-attacks by adding this directive:
add_header Strict-Transport-Security \"max-age=31536000; includeSubDomains\" always;\n
HSTS instructs web browsers to automatically use HTTPS. That means, after the first visit of the HTTPS version of Seahub, the browser will only use https to access the site.
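You can confirm the header is actually sent with curl (again using the sample host name):

```shell
# The HSTS header should appear in every HTTPS response
curl -sI https://seafile.example.com | grep -i 'strict-transport-security'
```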
"},{"location":"setup_binary/https_with_nginx/#using-perfect-forward-secrecy","title":"Using Perfect Forward Secrecy","text":"Enable Diffie-Hellman (DH) key-exchange. Generate DH parameters and write them in a .pem file using the following command:
$ openssl dhparam 2048 > /etc/nginx/dhparam.pem # Generates DH parameter of length 2048 bits\n
The generation of the DH parameters may take some time depending on the server's processing power.
Add the following directive in the HTTPS server block:
ssl_dhparam /etc/nginx/dhparam.pem;\n
"},{"location":"setup_binary/https_with_nginx/#restricting-tls-protocols-and-ciphers","title":"Restricting TLS protocols and ciphers","text":"Disallow the use of old TLS protocols and cipher. Mozilla provides a configuration generator for optimizing the conflicting objectives of security and compabitility. Visit https://wiki.mozilla.org/Security/Server_Side_TLS#Nginx for more Information.
"},{"location":"setup_binary/installation_ce/","title":"Installation of Seafile Server Community Edition with MySQL/MariaDB","text":"This manual explains how to deploy and run Seafile Server Community Edition (Seafile CE) on a Linux server from a pre-built package using MySQL/MariaDB as database. The deployment has been tested for Debian/Ubuntu.
"},{"location":"setup_binary/installation_ce/#requirements","title":"Requirements","text":"Seafile CE for x86 architecture requires a minimum of 2 cores and 2GB RAM.
"},{"location":"setup_binary/installation_ce/#setup","title":"Setup","text":""},{"location":"setup_binary/installation_ce/#installing-and-preparing-the-sql-database","title":"Installing and preparing the SQL database","text":"Seafile supports MySQL and MariaDB. We recommend that you use the preferred SQL database management engine included in the package repositories of your distribution.
You can find step-by-step how-tos for installing MySQL and MariaDB in the tutorials on the Digital Ocean website.
Seafile uses the mysql_native_password plugin for authentication. The versions of MySQL and MariaDB installed on CentOS 8, Debian 10, and Ubuntu 20.04 use a different authentication plugin by default. It is therefore required to change to authentication plugin to mysql_native_password for the root user prior to the installation of Seafile. The above mentioned tutorials explain how to do it.
"},{"location":"setup_binary/installation_ce/#installing-prerequisites","title":"Installing prerequisites","text":"Seafile 10.0.xSeafile 11.0.x Ubuntu 22.04/Ubuntu 20.04/Debian 11/Debian 10sudo apt-get update\nsudo apt-get install -y python3 python3-setuptools python3-pip libmysqlclient-dev\nsudo apt-get install -y memcached libmemcached-dev\n\nsudo pip3 install --timeout=3600 django==3.2.* future==0.18.* mysqlclient==2.1.* \\\n pymysql pillow==10.2.* pylibmc captcha==0.5.* markupsafe==2.0.1 jinja2 sqlalchemy==1.4.44 \\\n psd-tools django-pylibmc django_simple_captcha==0.5.20 djangosaml2==1.5.* pysaml2==7.2.* pycryptodome==3.16.* cffi==1.15.1 lxml\n
Debian 11/Ubuntu 22.04Debian 12Ubuntu 24.04 with virtual env # Ubuntu 22.04 (almost the same for Ubuntu 20.04 and Debian 11, Debian 10)\nsudo apt-get update\nsudo apt-get install -y python3 python3-dev python3-setuptools python3-pip libmysqlclient-dev ldap-utils libldap2-dev\nsudo apt-get install -y memcached libmemcached-dev\n\nsudo pip3 install --timeout=3600 django==4.2.* future==0.18.* mysqlclient==2.1.* \\\n pymysql pillow==10.2.* pylibmc captcha==0.5.* markupsafe==2.0.1 jinja2 sqlalchemy==2.0.18 \\\n psd-tools django-pylibmc django_simple_captcha==0.6.* djangosaml2==1.5.* pysaml2==7.2.* pycryptodome==3.16.* cffi==1.15.1 lxml python-ldap==3.4.3\n
Note
Debian 12 and Ubuntu 24.04 now discourage system-wide installation of Python modules with pip. It is preferred to install modules into a virtual environment, which keeps them separate from the files installed by the system package manager and allows different versions to be installed for different applications. For these Python virtual environments (venv for short) to work, you have to activate the venv to make the packages installed in it available to the programs you run. That is done here with source python-venv/bin/activate.
# Debian 12\nsudo apt-get update\nsudo apt-get install -y python3 python3-dev python3-setuptools python3-pip libmariadb-dev-compat ldap-utils libldap2-dev libsasl2-dev python3.11-venv\nsudo apt-get install -y memcached libmemcached-dev\n\nmkdir /opt/seafile\ncd /opt/seafile\n\n# create the virtual environment in the python-venv directory\npython3 -m venv python-venv\n\n# activate the venv\nsource python-venv/bin/activate\n# Notice that this will usually change your prompt so you know the venv is active\n\n# install packages into the active venv with pip (sudo isn't needed because this is installing in the venv, not system-wide).\npip3 install --timeout=3600 django==4.2.* future==0.18.* mysqlclient==2.1.* pymysql pillow==10.0.* pylibmc captcha==0.4 markupsafe==2.0.1 jinja2 sqlalchemy==2.0.18 psd-tools django-pylibmc django_simple_captcha==0.5.* djangosaml2==1.5.* pysaml2==7.2.* pycryptodome==3.16.* cffi==1.15.1 lxml python-ldap==3.4.3\n
Note
Debian 12 and Ubuntu 24.04 now discourage system-wide installation of Python modules with pip. It is preferred to install modules into a virtual environment, which keeps them separate from the files installed by the system package manager and allows different versions to be installed for different applications. For these Python virtual environments (venv for short) to work, you have to activate the venv to make the packages installed in it available to the programs you run. That is done here with source python-venv/bin/activate.
# Ubuntu 24.04\nsudo apt-get update\nsudo apt-get install -y python3 python3-dev python3-setuptools python3-pip libmysqlclient-dev ldap-utils libldap2-dev python3.12-venv\nsudo apt-get install -y memcached libmemcached-dev\n\nmkdir /opt/seafile\ncd /opt/seafile\n\n# create the virtual environment in the python-venv directory\npython3 -m venv python-venv\n\n# activate the venv\nsource python-venv/bin/activate\n# Notice that this will usually change your prompt so you know the venv is active\n\n# install packages into the active venv with pip (sudo isn't needed because this is installing in the venv, not system-wide).\npip3 install --timeout=3600 django==4.2.* future==0.18.* mysqlclient==2.1.* \\\n pymysql pillow==10.2.* pylibmc captcha==0.5.* markupsafe==2.0.1 jinja2 sqlalchemy==2.0.18 \\\n psd-tools django-pylibmc django_simple_captcha==0.6.* djangosaml2==1.5.* pysaml2==7.2.* pycryptodome==3.16.* cffi==1.16.0 lxml python-ldap==3.4.3\n
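With the venv active, you can verify that the key packages are importable before continuing (a quick sanity check, not part of the official setup):

```shell
# Both commands should print a version number, not an ImportError
python3 -c "import django; print(django.get_version())"
python3 -c "import sqlalchemy; print(sqlalchemy.__version__)"
```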
"},{"location":"setup_binary/installation_ce/#creating-the-program-directory","title":"Creating the program directory","text":"The standard directory for Seafile's program files is /opt/seafile
. Create this directory and change into it:
sudo mkdir /opt/seafile\ncd /opt/seafile\n
Tip
The program directory can be changed. The standard directory /opt/seafile
is assumed for the rest of this manual. If you decide to put Seafile in another directory, modify the commands accordingly.
It is good practice not to run applications as root.
Create a new user and follow the instructions on the screen:
sudo adduser seafile\n
Change ownership of the created directory to the new user:
sudo chown -R seafile: /opt/seafile\n
All the following steps are done as user seafile.
Change to user seafile:
su seafile\n
"},{"location":"setup_binary/installation_ce/#downloading-the-install-package","title":"Downloading the install package","text":"Download the install package from the download page on Seafile's website using wget.
We use Seafile CE version 8.0.4 as an example in the rest of this manual.
"},{"location":"setup_binary/installation_ce/#uncompressing-the-package","title":"Uncompressing the package","text":"The install package is downloaded as a compressed tarball which needs to be uncompressed.
Uncompress the package using tar:
tar xf seafile-server_8.0.4_x86-64.tar.gz\n
Now you have:
$ tree -L 2
.
├── seafile-server-8.0.4
│   ├── check_init_admin.py
│   ├── reset-admin.sh
│   ├── runtime
│   ├── seaf-fsck.sh
│   ├── seaf-fuse.sh
│   ├── seaf-gc.sh
│   ├── seafile
│   ├── seafile.sh
│   ├── seahub
│   ├── seahub.sh
│   ├── setup-seafile-mysql.py
│   ├── setup-seafile-mysql.sh
│   ├── setup-seafile.sh
│   ├── sql
│   └── upgrade
└── seafile-server_8.0.4_x86-64.tar.gz
"},{"location":"setup_binary/installation_ce/#setting-up-seafile-ce","title":"Setting up Seafile CE","text":"The install package comes with a script that sets Seafile up for you. Specifically, the script creates the required directories and extracts all files in the right place. It can also create a MySQL user and the three databases that Seafile's components require:
While the ccnet server was merged into seafile-server in Seafile 8.0, the corresponding database is still required for the time being.
Run the script as user seafile:
# For installations using python virtual environment, activate it if it isn't already active\nsource python-venv/bin/activate\n\ncd seafile-server-8.0.4\n./setup-seafile-mysql.sh\n
Configure your Seafile Server by specifying the following three parameters:
Option | Description | Note
--- | --- | ---
server name | Name of the Seafile Server | 3-15 characters; only English letters, digits and underscore ('_') are allowed
server's ip or domain | IP address or domain name used by the Seafile Server | Seafile client programs will access the server using this address
fileserver port | TCP port used by the Seafile fileserver | Default port is 8082; it is recommended to use this port and to change it only if it is used by another service

In the next step, choose whether to create new databases for Seafile or to use existing databases. The creation of new databases requires the root password for the SQL server.
When choosing \"[1] Create new ccnet/seafile/seahub databases\", the script creates these databases and a MySQL user that Seafile Server will use to access them. To this effect, you need to answer these questions:
Question | Description | Note
--- | --- | ---
mysql server host | Host address of the MySQL server | Default is localhost
mysql server port | TCP port used by the MySQL server | Default port is 3306; almost every MySQL server uses this port
mysql root password | Password of the MySQL root account | The root password is required to create new databases and a MySQL user
mysql user for Seafile | MySQL user created by the script, used by Seafile's components to access the databases | Default is seafile; the user is created unless it exists
mysql password for Seafile user | Password for the user above, written in Seafile's config files | Percent sign ('%') is not allowed
ccnet database name | Name of the database used by ccnet | Default is \"ccnet_db\"; the database is created if it does not exist
seafile database name | Name of the database used by Seafile | Default is \"seafile_db\"; the database is created if it does not exist
seahub database name | Name of the database used by Seahub | Default is \"seahub_db\"; the database is created if it does not exist

When choosing \"[2] Use existing ccnet/seafile/seahub databases\", these are the prompts you need to answer:
Question | Description | Note
--- | --- | ---
mysql server host | Host address of the MySQL server | Default is localhost
mysql server port | TCP port used by the MySQL server | Default port is 3306; almost every MySQL server uses this port
mysql user for Seafile | User used by Seafile's components to access the databases | The user must exist
mysql password for Seafile user | Password for the user above |
ccnet database name | Name of the database used by ccnet; default is \"ccnet_db\" | The database must exist
seafile database name | Name of the database used by Seafile; default is \"seafile_db\" | The database must exist
seahub database name | Name of the database used by Seahub; default is \"seahub_db\" | The database must exist

If the setup is successful, you see the following output:
The directory layout then looks as follows:
$ tree /opt/seafile -L 2
seafile
├── ccnet
├── conf
│   ├── gunicorn.conf.py
│   ├── seafdav.conf
│   ├── seafile.conf
│   └── seahub_settings.py
├── seafile-data
│   └── library-template
├── seafile-server-8.0.4
│   ├── check_init_admin.py
│   ├── reset-admin.sh
│   ├── runtime
│   ├── seaf-fsck.sh
│   ├── seaf-gc.sh
│   ├── seafile
│   ├── seafile.sh
│   ├── seahub
│   ├── seahub.sh
│   ├── setup-seafile-mysql.py
│   ├── setup-seafile-mysql.sh
│   ├── sql
│   └── upgrade
├── seafile-server-latest -> seafile-server-8.0.4
└── seahub-data
    └── avatars
The folder seafile-server-latest
is a symbolic link to the current Seafile Server folder. When later you upgrade to a new version, the upgrade scripts update this link to point to the latest Seafile Server folder.
Note
If you don't have the root password, you need someone who has the privileges, e.g., the database admin, to create the three databases required by Seafile, as well as a MySQL user who can access the databases. For example, to create three databases ccnet_db
/ seafile_db
/ seahub_db
for ccnet/seafile/seahub respectively, and a MySQL user \"seafile\" to access these databases run the following SQL queries:
create database `ccnet_db` character set = 'utf8';\ncreate database `seafile_db` character set = 'utf8';\ncreate database `seahub_db` character set = 'utf8';\n\ncreate user 'seafile'@'localhost' identified by 'seafile';\n\nGRANT ALL PRIVILEGES ON `ccnet_db`.* to `seafile`@localhost;\nGRANT ALL PRIVILEGES ON `seafile_db`.* to `seafile`@localhost;\nGRANT ALL PRIVILEGES ON `seahub_db`.* to `seafile`@localhost;\n
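If you prefer to script this step, a small helper can keep the statements for the three databases consistent. This is only a sketch (the `make_db_sql` function name is ours, not part of Seafile); its output is meant to be piped into `mysql -u root -p`:

```shell
# Hypothetical helper: print the CREATE DATABASE and GRANT statements for one database
make_db_sql() {
  db="$1"; user="$2"
  printf "create database \`%s\` character set = 'utf8';\n" "$db"
  printf "GRANT ALL PRIVILEGES ON \`%s\`.* to \`%s\`@localhost;\n" "$db" "$user"
}

# The user itself is created once, then one block per database
printf "create user 'seafile'@'localhost' identified by 'seafile';\n"
for db in ccnet_db seafile_db seahub_db; do
  make_db_sql "$db" seafile
done
```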
"},{"location":"setup_binary/installation_ce/#setup-memory-cache","title":"Setup Memory Cache","text":"Seahub caches items (avatars, profiles, etc.) on the file system by default (/tmp/seahub_cache/). You can replace this with Memcached or Redis.
You can use either Memcached or Redis. Use the following commands to install Memcached and the corresponding libraries on your system:
# on Debian/Ubuntu 18.04+\napt-get install memcached libmemcached-dev -y\npip3 install --timeout=3600 pylibmc django-pylibmc\n\nsystemctl enable --now memcached\n
Add the following configuration to seahub_settings.py
.
CACHES = {\n 'default': {\n 'BACKEND': 'django_pylibmc.memcached.PyLibMCCache',\n 'LOCATION': '127.0.0.1:11211',\n },\n}\n
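To verify that Seahub can reach the cache behind that LOCATION value, you can probe memcached directly. A sketch assuming the default 127.0.0.1:11211 and that `nc` (netcat) is installed:

```shell
LOCATION="127.0.0.1:11211"   # same value as in seahub_settings.py
host=${LOCATION%%:*}
port=${LOCATION##*:}

# Any reply to "version" means the cache is reachable
printf 'version\r\nquit\r\n' | nc -w 2 "$host" "$port" \
  || echo "memcached not reachable at $LOCATION"
```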
Redis is supported since version 11.0
Install Redis with the package manager of your OS.
Then refer to Django's documentation about using the Redis cache to add the Redis configuration to seahub_settings.py
.
Seafile's config files as created by the setup script are prepared for Seafile running behind a reverse proxy.
To access Seafile's web interface and to create working sharing links without a reverse proxy, you need to modify two configuration files in /opt/seafile/conf
:
- seahub_settings.py (if you use 9.0.x): Add port 8000 to the SERVICE_URL (i.e., SERVICE_URL = 'http://1.2.3.4:8000/').
- ccnet.conf (if you use 8.0.x or 7.1.x): Add port 8000 to the SERVICE_URL (i.e., SERVICE_URL = http://1.2.3.4:8000/).
- gunicorn.conf.py: Change the bind to \"0.0.0.0:8000\" (i.e., bind = \"0.0.0.0:8000\").

Run the following commands in /opt/seafile/seafile-server-latest
:
# For installations using python virtual environment, activate it if it isn't already active\nsource python-venv/bin/activate\n\n./seafile.sh start # starts seaf-server\n./seahub.sh start # starts seahub\n
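Once both scripts report success, you can confirm the services are listening. The `check_port` helper below is our own illustration, not a Seafile tool; the ports are the defaults used in this manual:

```shell
# Hypothetical helper: report whether an HTTP service answers on a local port
check_port() {
  if curl -s -o /dev/null --max-time 2 "http://127.0.0.1:$1"; then
    echo "port $1: up"
  else
    echo "port $1: down"
  fi
}

check_port 8000   # seahub
check_port 8082   # seaf-server fileserver
```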
Success
The first time you start Seahub, the script prompts you to create an admin account for your Seafile Server. Enter the email address of the admin user followed by the password.
Now you can access Seafile via the web interface at the host address and port 8000 (e.g., http://1.2.3.4:8000)
Warning
On CentOS, the firewall blocks traffic on port 8000 by default.
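If your CentOS system uses firewalld, the port can be opened like this (a sketch; adjust it if you changed the Seahub port or use a non-default zone):

```shell
# Allow inbound traffic to Seahub on port 8000 and apply the change
sudo firewall-cmd --permanent --add-port=8000/tcp
sudo firewall-cmd --reload
```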
"},{"location":"setup_binary/installation_ce/#stopping-and-restarting-seafile-and-seahub","title":"Stopping and Restarting Seafile and Seahub","text":""},{"location":"setup_binary/installation_ce/#stopping","title":"Stopping","text":"./seahub.sh stop # stops seahub\n./seafile.sh stop # stops seaf-server\n
"},{"location":"setup_binary/installation_ce/#restarting","title":"Restarting","text":"# For installations using python virtual environment, activate it if it isn't already active\nsource python-venv/bin/activate\n\n./seafile.sh restart\n./seahub.sh restart\n
"},{"location":"setup_binary/installation_ce/#enabling-https","title":"Enabling HTTPS","text":"It is strongly recommended to switch from unencrypted HTTP (via port 8000) to encrypted HTTPS (via port 443).
This manual provides instructions for enabling HTTPS for the two most popular web servers and reverse proxies:
This manual explains how to deploy and run Seafile Server Professional Edition (Seafile PE) on a Linux server from a pre-built package using MySQL/MariaDB as database. The deployment has been tested for Debian/Ubuntu.
"},{"location":"setup_binary/installation_pro/#requirements","title":"Requirements","text":"Seafile PE requires a minimum of 2 cores and 2GB RAM. If elasticsearch is installed on the same server, the minimum requirements are 4 cores and 4 GB RAM.
Seafile PE can be used without a paid license with up to three users. Licenses for more users can be purchased in the Seafile Customer Center, from Seafile Sales at sales@seafile.com, or from one of our partners.
"},{"location":"setup_binary/installation_pro/#setup","title":"Setup","text":""},{"location":"setup_binary/installation_pro/#installing-and-preparing-the-sql-database","title":"Installing and preparing the SQL database","text":"These instructions assume that MySQL/MariaDB server and client are installed and a MySQL/MariaDB root user can authenticate using the mysql_native_password plugin.
"},{"location":"setup_binary/installation_pro/#installing-prerequisites","title":"Installing prerequisites","text":"The prerequisites depend on the Seafile version (9.0.x, 10.0.x or 11.0.x) and the distribution. For Seafile 9.0.x on Ubuntu 20.04/Debian 10/Ubuntu 18.04:\napt-get update\napt-get install -y python3 python3-setuptools python3-pip python3-ldap libmysqlclient-dev\napt-get install -y memcached libmemcached-dev\napt-get install -y poppler-utils\n\npip3 install --timeout=3600 django==3.2.* future mysqlclient pymysql Pillow pylibmc \\\ncaptcha jinja2 sqlalchemy==1.4.3 psd-tools django-pylibmc django-simple-captcha pycryptodome==3.12.0 cffi==1.14.0 lxml\n
For Seafile 9.0.x on CentOS 8:\nsudo yum install python3 python3-setuptools python3-pip python3-devel mysql-devel gcc -y\nsudo yum install poppler-utils -y\n\nsudo pip3 install --timeout=3600 django==3.2.* Pillow==9.4.0 pylibmc captcha jinja2 sqlalchemy==1.4.3 \\\n django-pylibmc django-simple-captcha python3-ldap mysqlclient pycryptodome==3.12.0 cffi==1.14.0 lxml\n
For Seafile 10.0.x on Ubuntu 22.04/Ubuntu 20.04/Debian 11/Debian 10:\napt-get update\napt-get install -y python3 python3-setuptools python3-pip python3-ldap libmysqlclient-dev\napt-get install -y memcached libmemcached-dev\napt-get install -y poppler-utils\n\nsudo pip3 install --timeout=3600 django==3.2.* future==0.18.* mysqlclient==2.1.* \\\n pymysql pillow==10.2.* pylibmc captcha==0.5.* markupsafe==2.0.1 jinja2 sqlalchemy==1.4.44 \\\n psd-tools django-pylibmc django_simple_captcha==0.5.20 djangosaml2==1.5.* pysaml2==7.2.* pycryptodome==3.16.* cffi==1.15.1 lxml\n
For Seafile 11.0.x on Ubuntu 22.04/Ubuntu 20.04/Debian 11/Debian 10 (Debian 12 and Ubuntu 24.04 with a virtual env are covered in the notes below):\napt-get update\napt-get install -y python3 python3-dev python3-setuptools python3-pip python3-ldap libmysqlclient-dev ldap-utils libldap2-dev dnsutils\napt-get install -y memcached libmemcached-dev\napt-get install -y poppler-utils\n\nsudo pip3 install --timeout=3600 django==4.2.* future==0.18.* mysqlclient==2.1.* \\\n pymysql pillow==10.2.* pylibmc captcha==0.5.* markupsafe==2.0.1 jinja2 sqlalchemy==2.0.18 \\\n psd-tools django-pylibmc django_simple_captcha==0.6.* djangosaml2==1.5.* pysaml2==7.2.* pycryptodome==3.16.* cffi==1.15.1 python-ldap==3.4.3 lxml\n
Note
Debian 12 and Ubuntu 24.04 now discourage system-wide installation of Python modules with pip. It is preferred to install modules into a virtual environment, which keeps them separate from the files installed by the system package manager and allows different versions to be installed for different applications. For such a Python virtual environment (venv for short) to work, you have to activate it so that the packages installed in it become available to the programs you run. That is done here with source python-venv/bin/activate
.
sudo apt-get update\nsudo apt-get install -y python3 python3-dev python3-setuptools python3-pip libmariadb-dev-compat ldap-utils libldap2-dev libsasl2-dev python3.11-venv\nsudo apt-get install -y memcached libmemcached-dev\n\nmkdir /opt/seafile\ncd /opt/seafile\n\n# create the virtual environment in the python-venv directory\npython3 -m venv python-venv\n\n# activate the venv\nsource python-venv/bin/activate\n# Notice that this will usually change your prompt so you know the venv is active\n\n# install packages into the active venv with pip (sudo isn't needed because this is installing in the venv, not system-wide).\npip3 install --timeout=3600 django==4.2.* future==0.18.* mysqlclient==2.1.* pymysql pillow==10.0.* pylibmc captcha==0.4 markupsafe==2.0.1 jinja2 sqlalchemy==2.0.18 psd-tools django-pylibmc django_simple_captcha==0.5.* djangosaml2==1.5.* pysaml2==7.2.* pycryptodome==3.16.* cffi==1.15.1 lxml python-ldap==3.4.3\n
Note
Debian 12 and Ubuntu 24.04 now discourage system-wide installation of Python modules with pip. It is preferred to install modules into a virtual environment, which keeps them separate from the files installed by the system package manager and allows different versions to be installed for different applications. For such a Python virtual environment (venv for short) to work, you have to activate it so that the packages installed in it become available to the programs you run. That is done here with source python-venv/bin/activate
.
# Ubuntu 24.04\nsudo apt-get update\nsudo apt-get install -y python3 python3-dev python3-setuptools python3-pip libmysqlclient-dev ldap-utils libldap2-dev python3.12-venv\nsudo apt-get install -y memcached libmemcached-dev\n\nmkdir /opt/seafile\ncd /opt/seafile\n\n# create the virtual environment in the python-venv directory\npython3 -m venv python-venv\n\n# activate the venv\nsource python-venv/bin/activate\n# Notice that this will usually change your prompt so you know the venv is active\n\n# install packages into the active venv with pip (sudo isn't needed because this is installing in the venv, not system-wide).\npip3 install --timeout=3600 django==4.2.* future==0.18.* mysqlclient==2.1.* \\\n pymysql pillow==10.2.* pylibmc captcha==0.5.* markupsafe==2.0.1 jinja2 sqlalchemy==2.0.18 \\\n psd-tools django-pylibmc django_simple_captcha==0.6.* djangosaml2==1.5.* pysaml2==7.2.* pycryptodome==3.16.* cffi==1.16.0 lxml python-ldap==3.4.3\n
"},{"location":"setup_binary/installation_pro/#installing-java-runtime-environment","title":"Installing Java Runtime Environment","text":"Java Runtime Environment (JRE) is no longer needed in Seafile version 12.0.
"},{"location":"setup_binary/installation_pro/#creating-the-programm-directory","title":"Creating the program directory","text":"The standard directory for Seafile's program files is /opt/seafile
. Create this directory and change into it:
mkdir /opt/seafile\ncd /opt/seafile\n
The program directory can be changed. The standard directory /opt/seafile
is assumed for the rest of this manual. If you decide to put Seafile in another directory, some commands need to be modified accordingly.
Elasticsearch, the indexing server, cannot be run as root. More generally, it is good practice not to run applications as root.
Create a new user and follow the instructions on the screen:
adduser seafile\n
Change ownership of the created directory to the new user:
chown -R seafile: /opt/seafile\n
All the following steps are done as user seafile.
Change to user seafile:
su seafile\n
"},{"location":"setup_binary/installation_pro/#placing-the-seafile-pe-license","title":"Placing the Seafile PE license","text":"Save the license file in Seafile's program directory /opt/seafile
. Make sure that the name is seafile-license.txt
. (If the file has a different name or cannot be read, Seafile PE will not start.)
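Since a misnamed or unreadable license file silently prevents startup, a pre-flight check can help. The `check_license` helper below is our own sketch, not part of Seafile:

```shell
# Hypothetical helper: verify the license file has the exact required name and is readable
check_license() {
  f="$1"
  if [ "$(basename "$f")" = "seafile-license.txt" ] && [ -r "$f" ]; then
    echo "OK"
  else
    echo "MISSING"
  fi
}

check_license /opt/seafile/seafile-license.txt
```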
The install packages for Seafile PE are available for download in the Seafile Customer Center. To access the Customer Center, a user account is necessary. The registration is free.
Beginning with Seafile PE 7.0.17, the Seafile Customer Center provides two install packages for every version (using Seafile PE 8.0.4 as an example):
The former is suitable for installation on Ubuntu/Debian servers, the latter for CentOS servers.
Download the install package using wget (replace the x.x.x with the version you wish to download):
# Debian/Ubuntu\nwget -O 'seafile-pro-server_x.x.x_x86-64_Ubuntu.tar.gz' 'VERSION_SPECIFIC_LINK_FROM_SEAFILE_CUSTOMER_CENTER'\n
We use Seafile version 8.0.4 as an example in the remainder of these instructions.
"},{"location":"setup_binary/installation_pro/#uncompressing-the-package","title":"Uncompressing the package","text":"The install package is downloaded as a compressed tarball which needs to be uncompressed.
Uncompress the package using tar:
# Debian/Ubuntu\ntar xf seafile-pro-server_8.0.4_x86-64_Ubuntu.tar.gz\n
Now you have:
$ tree -L 2 /opt/seafile
.
├── seafile-license.txt
├── seafile-pro-server-8.0.4
│   ├── check-db-type.py
│   ├── check_init_admin.py
│   ├── create-db
│   ├── index_op.py
│   ├── migrate.py
│   ├── migrate-repo.py
│   ├── migrate-repo.sh
│   ├── migrate.sh
│   ├── pro
│   ├── remove-objs.py
│   ├── remove-objs.sh
│   ├── reset-admin.sh
│   ├── run_index_master.sh
│   ├── run_index_worker.sh
│   ├── runtime
│   ├── seaf-backup-cmd.py
│   ├── seaf-backup-cmd.sh
│   ├── seaf-encrypt.sh
│   ├── seaf-fsck.sh
│   ├── seaf-fuse.sh
│   ├── seaf-gc.sh
│   ├── seaf-gen-key.sh
│   ├── seafile
│   ├── seafile-background-tasks.sh
│   ├── seafile.sh
│   ├── seaf-import.sh
│   ├── seahub
│   ├── seahub-extra
│   ├── seahub.sh
│   ├── setup-seafile-mysql.py
│   ├── setup-seafile-mysql.sh
│   ├── setup-seafile.sh
│   ├── sql
│   └── upgrade
└── seafile-pro-server_8.0.4_x86-64.tar.gz
Tip
The names of the install packages differ for Seafile CE and Seafile PE. Using Seafile CE and Seafile PE 8.0.4 as an example, the names are as follows:
seafile-server_8.0.4_x86-64.tar.gz
; uncompressing into folder seafile-server-8.0.4
seafile-pro-server_8.0.4_x86-64.tar.gz
; uncompressing into folder seafile-pro-server-8.0.4
The setup process of Seafile PE is the same as that of Seafile CE. See Installation of Seafile Server Community Edition with MySQL/MariaDB.
After the successful completion of the setup script, the directory layout of Seafile PE looks as follows (some folders only get created after the first start, e.g. logs
):
For Seafile 7.1.x and later
$ tree -L 2 /opt/seafile
.
├── seafile-license.txt  # license file
├── ccnet
├── conf                 # configuration files
│   ├── ccnet.conf
│   ├── gunicorn.conf.py
│   ├── __pycache__
│   ├── seafdav.conf
│   ├── seafevents.conf
│   ├── seafile.conf
│   └── seahub_settings.py
├── logs                 # log files
├── pids                 # process id files
├── pro-data             # data specific for Seafile PE
├── seafile-data         # object database
├── seafile-pro-server-8.0.4
│   ├── check-db-type.py
│   ├── check_init_admin.py
│   ├── create-db
│   ├── index_op.py
│   ├── migrate.py
│   ├── migrate-repo.py
│   ├── migrate-repo.sh
│   ├── migrate.sh
│   ├── pro
│   ├── reset-admin.sh
│   ├── run_index_master.sh
│   ├── run_index_worker.sh
│   ├── runtime
│   ├── seaf-backup-cmd.py
│   ├── seaf-backup-cmd.sh
│   ├── seaf-encrypt.sh
│   ├── seaf-fsck.sh
│   ├── seaf-fuse.sh
│   ├── seaf-gc.sh
│   ├── seaf-gen-key.sh
│   ├── seafile
│   ├── seafile-background-tasks.sh
│   ├── seafile.sh
│   ├── seaf-import.sh
│   ├── seahub
│   ├── seahub-extra
│   ├── seahub.sh
│   ├── setup-seafile-mysql.py
│   ├── setup-seafile-mysql.sh
│   ├── setup-seafile.sh
│   ├── sql
│   └── upgrade
├── seafile-server-latest -> seafile-pro-server-8.0.4
└── seahub-data
    └── avatars          # user avatars
"},{"location":"setup_binary/installation_pro/#setup-memory-cache","title":"Setup Memory Cache","text":"Memory cache is mandatory for the Pro edition. You may use Memcached or Redis as the cache server.
You can use either Memcached or Redis. Use the following commands to install Memcached and the corresponding libraries on your system:
# on Debian/Ubuntu 18.04+\napt-get install memcached libmemcached-dev -y\npip3 install --timeout=3600 pylibmc django-pylibmc\n\nsystemctl enable --now memcached\n
Add the following configuration to seahub_settings.py
.
CACHES = {\n 'default': {\n 'BACKEND': 'django_pylibmc.memcached.PyLibMCCache',\n 'LOCATION': '127.0.0.1:11211',\n },\n}\n
Redis is supported since version 11.0
Install Redis with the package manager of your OS.
Then refer to Django's documentation about using the Redis cache to add the Redis configuration to seahub_settings.py
.
You need to set up at least HTTP to make Seafile's web interface work. This manual provides instructions for enabling HTTP/HTTPS for the two most popular web servers and reverse proxies:
Run the following commands in /opt/seafile/seafile-server-latest
:
# For installations using python virtual environment, activate it if it isn't already active\nsource python-venv/bin/activate\n\n./seafile.sh start # Start Seafile service\n./seahub.sh start # Start seahub website, port defaults to 127.0.0.1:8000\n
Success
The first time you start Seahub, the script prompts you to create an admin account for your Seafile Server. Enter the email address of the admin user followed by the password.
Now you can access Seafile via the web interface at the host address (e.g., http://1.2.3.4:80).
"},{"location":"setup_binary/installation_pro/#enabling-full-text-search","title":"Enabling full text search","text":"Seafile uses the indexing server ElasticSearch to enable full text search.
"},{"location":"setup_binary/installation_pro/#deploying-elasticsearch","title":"Deploying ElasticSearch","text":"Our recommendation for deploying ElasticSearch is using Docker. Detailed information about installing Docker on various Linux distributions is available at Docker Docs.
Seafile PE 9.0 only supports ElasticSearch 7.x. Seafile PE 10.0, 11.0, 12.0 only supports ElasticSearch 8.x.
We use ElasticSearch version 7.16.2 as an example in this section. Version 7.16.2 and newer versions have been successfully tested with Seafile.
Pull the Docker image:
sudo docker pull elasticsearch:7.16.2\n
Create a folder for persistent data created by ElasticSearch and change its permission:
sudo mkdir -p /opt/seafile-elasticsearch/data && sudo chmod -R 777 /opt/seafile-elasticsearch/data/\n
Now start the ElasticSearch container using the docker run command:
sudo docker run -d \\\n--name es \\\n-p 9200:9200 \\\n-e \"discovery.type=single-node\" -e \"bootstrap.memory_lock=true\" \\\n-e \"ES_JAVA_OPTS=-Xms2g -Xmx2g\" -e \"xpack.security.enabled=false\" \\\n--restart=always \\\n-v /opt/seafile-elasticsearch/data:/usr/share/elasticsearch/data \\\nelasticsearch:7.16.2\n
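Once the container is running, Elasticsearch's standard cluster-health endpoint makes a convenient readiness probe. A sketch (assumes port 9200 as mapped above; the fallback message is our own):

```shell
# Query the health endpoint; print a fallback document if unreachable
es_health() {
  curl -s --max-time 2 "http://127.0.0.1:9200/_cluster/health" \
    || echo '{"status":"unreachable"}'
}

es_health
```

A status of "green" or "yellow" means the index server is ready to accept Seafile's indexing requests.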
Security notice
We sincerely thank Mohammed Adel of Safe Decision Co., for the suggestion of this notice.
By default, Elasticsearch will only listen on 127.0.0.1
, but this rule may become invalid after Docker exposes the service port, which will make your Elasticsearch service vulnerable to attackers accessing and extracting sensitive data due to exposure to the external network. We recommend that you manually configure the Docker firewall, such as
sudo iptables -A INPUT -p tcp -s <your seafile server ip> --dport 9200 -j ACCEPT\nsudo iptables -A INPUT -p tcp --dport 9200 -j DROP\n
The above command will only allow the host where your Seafile service is located to connect to Elasticsearch, and other addresses will be blocked. If you deploy Elasticsearch based on binary packages, you need to refer to the official document to set the address that Elasticsearch binds to.
"},{"location":"setup_binary/installation_pro/#modifying-seafevents","title":"Modifying seafevents","text":"Add the following configuration to seafevents.conf
:
[INDEX FILES]\nes_host = your elasticsearch server's IP # IP address of ElasticSearch host\n # use 127.0.0.1 if deployed on the same server\nes_port = 9200 # port of ElasticSearch host\ninterval = 10m # frequency of index updates in minutes\nhighlight = fvh # parameter for improving the search performance\n
Finally, restart Seafile:
./seafile.sh restart && ./seahub.sh restart \n
"},{"location":"setup_binary/memcached_mariadb_cluster/","title":"Setup Memcached Cluster and MariaDB Galera Cluster","text":"For high availability, it is recommended to set up a memcached cluster and MariaDB Galera cluster for Seafile cluster. This documentation will provide information on how to do this with 3 servers. You can either use 3 dedicated servers or use the 3 Seafile server nodes.
"},{"location":"setup_binary/memcached_mariadb_cluster/#setup-memcached-cluster","title":"Setup Memcached Cluster","text":"Seafile servers share session information within memcached. So when you set up a Seafile cluster, there needs to be a memcached server (cluster) running.
The simplest way is to use a single-node memcached server. But when this server fails, some functions in the web UI of Seafile cannot work. So for HA, it's usually desirable to have more than one memcached server.
We recommend setting up two independent memcached servers in active/standby mode. A floating IP address (or virtual IP address in some contexts) is assigned to the current active node. When the active node goes down, Keepalived will migrate the virtual IP to the standby node. So you actually use a single-node memcached, but use Keepalived (or other alternatives) to provide high availability.
After installing memcached on each server, you need to make some modifications to the memcached config file.
# Under Ubuntu\nvi /etc/memcached.conf\n\n# Start with a cap of 64 megs of memory. It's reasonable, and the daemon default\n# Note that the daemon will grow to this size, but does not start out holding this much\n# memory\n# -m 64\n-m 256\n\n# Specify which IP address to listen on. The default is to listen on all IP addresses\n# This parameter is one of the only security measures that memcached has, so make sure\n# it's listening on a firewalled interface.\n-l 0.0.0.0\n\nservice memcached restart\n
Please configure memcached to start on system startup
Install and configure Keepalived.
# For Ubuntu\nsudo apt-get install keepalived -y\n
Modify keepalived config file /etc/keepalived/keepalived.conf
.
On active node
cat /etc/keepalived/keepalived.conf\n\n! Configuration File for keepalived\n\nglobal_defs {\n notification_email {\n root@localhost\n }\n notification_email_from keepalived@localhost\n smtp_server 127.0.0.1\n smtp_connect_timeout 30\n router_id node1\n vrrp_mcast_group4 224.0.100.19\n}\nvrrp_script chk_memcached {\n script \"killall -0 memcached && exit 0 || exit 1\"\n interval 1\n weight -5\n}\n\nvrrp_instance VI_1 {\n state MASTER\n interface ens33\n virtual_router_id 51\n priority 100\n advert_int 1\n authentication {\n auth_type PASS\n auth_pass hello123\n }\n virtual_ipaddress {\n 192.168.1.113/24 dev ens33\n }\n track_script {\n chk_memcached\n }\n}\n
On standby node
cat /etc/keepalived/keepalived.conf\n\n! Configuration File for keepalived\n\nglobal_defs {\n notification_email {\n root@localhost\n }\n notification_email_from keepalived@localhost\n smtp_server 127.0.0.1\n smtp_connect_timeout 30\n router_id node2\n vrrp_mcast_group4 224.0.100.19\n}\nvrrp_script chk_memcached {\n script \"killall -0 memcached && exit 0 || exit 1\"\n interval 1\n weight -5\n}\n\nvrrp_instance VI_1 {\n state BACKUP\n interface ens33\n virtual_router_id 51\n priority 98\n advert_int 1\n authentication {\n auth_type PASS\n auth_pass hello123\n }\n virtual_ipaddress {\n 192.168.1.113/24 dev ens33\n }\n track_script {\n chk_memcached\n }\n}\n
Please adjust the network device names accordingly. virtual_ipaddress is the floating IP address in use.
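To check which node currently holds the floating IP, you can list the addresses on the interface (device name and address taken from the example config above; adjust to your setup):

```
ip addr show ens33 | grep 192.168.1.113
```

The command prints a matching line on the active node and nothing on the standby node.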
"},{"location":"setup_binary/memcached_mariadb_cluster/#setup-mariadb-cluster","title":"Setup MariaDB Cluster","text":"MariaDB cluster helps you to remove single point of failure from the cluster architecture. Every update in the database cluster is synchronously replicated to all instances.
You can choose between two different setups:
We refer to the documentation from MariaDB team:
Tip
Seafile doesn't use read/write isolation techniques, so you don't need to set up separate read and write pools.
"},{"location":"setup_binary/migrate_from_sqlite_to_mysql/","title":"Migrate From SQLite to MySQL","text":"Note
The tutorial is only related to Seafile CE edition.
First make sure the python module for MySQL is installed. On Ubuntu/Debian, use sudo apt-get install python-mysqldb
or sudo apt-get install python3-mysqldb
to install it.
Steps to migrate Seafile from SQLite to MySQL:
Stop Seafile and Seahub.
Download sqlite2mysql.sh and sqlite2mysql.py to the top directory of your Seafile installation path. For example, /opt/seafile.
Run sqlite2mysql.sh:
chmod +x sqlite2mysql.sh\n./sqlite2mysql.sh\n
This script will produce three files: ccnet-db.sql, seafile-db.sql and seahub-db.sql.
Then create the three databases ccnet_db, seafile_db and seahub_db, and a seafile user.
mysql> create database ccnet_db character set = 'utf8';\nmysql> create database seafile_db character set = 'utf8';\nmysql> create database seahub_db character set = 'utf8';\n
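The import steps below use the MySQL root account. If you also want the dedicated seafile database user mentioned above, a sketch might look like this (user name, host and password are placeholders; adjust to your environment):

```sql
mysql> create user 'seafile'@'localhost' identified by 'your-password';
mysql> grant all privileges on ccnet_db.* to 'seafile'@'localhost';
mysql> grant all privileges on seafile_db.* to 'seafile'@'localhost';
mysql> grant all privileges on seahub_db.* to 'seafile'@'localhost';
mysql> flush privileges;
```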
Import the ccnet data into MySQL.
mysql> use ccnet_db;\nmysql> source ccnet-db.sql;\n
Import the seafile data into MySQL.
mysql> use seafile_db;\nmysql> source seafile-db.sql;\n
Import the seahub data into MySQL.
mysql> use seahub_db;\nmysql> source seahub-db.sql;\n
Note: ccnet.conf has been removed since Seafile 12.0.
Modify the configuration files: append the following lines to ccnet.conf:
[Database]\nENGINE=mysql\nHOST=127.0.0.1\nPORT = 3306\nUSER=root\nPASSWD=root\nDB=ccnet_db\nCONNECTION_CHARSET=utf8\n
Use 127.0.0.1, don't use localhost.
Replace the database section in seafile.conf with the following lines:
[database]\ntype=mysql\nhost=127.0.0.1\nport = 3306\nuser=root\npassword=root\ndb_name=seafile_db\nconnection_charset=utf8\n
Append the following lines to seahub_settings.py:
DATABASES = {\n 'default': {\n 'ENGINE': 'django.db.backends.mysql',\n 'USER' : 'root',\n 'PASSWORD' : 'root',\n 'NAME' : 'seahub_db',\n 'HOST' : '127.0.0.1',\n 'PORT': '3306',\n # This is only needed for MySQL older than 5.5.5.\n # For MySQL newer than 5.5.5 INNODB is the default already.\n 'OPTIONS': {\n \"init_command\": \"SET storage_engine=INNODB\",\n }\n }\n}\n
Restart seafile and seahub
Note
User notifications will be cleared during migration due to slight differences between MySQL and SQLite. If you only see a busy icon when you click the notifications button beside your avatar, please clear the user notifications table manually by:
use seahub_db;\ndelete from notifications_usernotification;\n
"},{"location":"setup_binary/migrate_from_sqlite_to_mysql/#faq","title":"FAQ","text":""},{"location":"setup_binary/migrate_from_sqlite_to_mysql/#encountered-errno-150-foreign-key-constraint-is-incorrectly-formed","title":"Encountered errno: 150 \"Foreign key constraint is incorrectly formed\"
","text":"This error typically occurs because the current table being created contains a foreign key that references a table whose primary key has not yet been created. Therefore, please check the database table creation order in the SQL file. The correct order is:
auth_user\nauth_group\nauth_permission\nauth_group_permissions\nauth_user_groups\nauth_user_user_permissions\n
and post_office_emailtemplate\npost_office_email\npost_office_attachment\npost_office_attachment_emails\n
"},{"location":"setup_binary/outline_ce/","title":"Deploying Seafile","text":"We provide two ways to deploy Seafile services. Docker is the recommended way.
Warning
Since version 12.0, binary based deployment for community edition is deprecated and will not be supported in a future release.
There are two ways to deploy Seafile Pro Edition. Since version 8.0, the recommended way to install Seafile Pro Edition is using Docker.
Seafile Professional Edition SOFTWARE LICENSE AGREEMENT
NOTICE: READ THE FOLLOWING TERMS AND CONDITIONS CAREFULLY BEFORE YOU DOWNLOAD, INSTALL OR USE Seafile Ltd.'S PROPRIETARY SOFTWARE. BY INSTALLING OR USING THE SOFTWARE, YOU AGREE TO BE BOUND BY THE FOLLOWING TERMS AND CONDITIONS. IF YOU DO NOT AGREE TO THE FOLLOWING TERMS AND CONDITIONS, DO NOT INSTALL OR USE THE SOFTWARE.
"},{"location":"setup_binary/seafile_professional_sdition_software_license_agreement/#1-definitions","title":"1. DEFINITIONS","text":"\"Seafile Ltd.\" means Seafile Ltd.
\"You and Your\" means the party licensing the Software hereunder.
\"Software\" means the computer programs provided under the terms of this license by Seafile Ltd. together with any documentation provided therewith.
"},{"location":"setup_binary/seafile_professional_sdition_software_license_agreement/#2-grant-of-rights","title":"2. GRANT OF RIGHTS","text":""},{"location":"setup_binary/seafile_professional_sdition_software_license_agreement/#21-general","title":"2.1 General","text":"The License granted for Software under this Agreement authorizes You on a non-exclusive basis to use the Software. The Software is licensed, not sold to You and Seafile Ltd. reserves all rights not expressly granted to You in this Agreement. The License is personal to You and may not be assigned by You to any third party.
"},{"location":"setup_binary/seafile_professional_sdition_software_license_agreement/#22-license-provisions","title":"2.2 License Provisions","text":"Subject to the receipt by Seafile Ltd. of the applicable license fees, You have the right use the Software as follows:
The inclusion of source code with the License is explicitly not for your use to customize a solution or re-use in your own projects or products. The benefit of including the source code is for purposes of security auditing. You may modify the code only for emergency bug fixes that impact security or performance and only for use within your enterprise. You may not create or distribute derivative works based on the Software or any part thereof. If you need enhancements to the software features, you should suggest them to Seafile Ltd. for version improvements.
"},{"location":"setup_binary/seafile_professional_sdition_software_license_agreement/#4-ownership","title":"4. OWNERSHIP","text":"You acknowledge that all copies of the Software in any form are the sole property of Seafile Ltd.. You have no right, title or interest to any such Software or copies thereof except as provided in this Agreement.
"},{"location":"setup_binary/seafile_professional_sdition_software_license_agreement/#5-confidentiality","title":"5. CONFIDENTIALITY","text":"You hereby acknowledge and agreed that the Software constitute and contain valuable proprietary products and trade secrets of Seafile Ltd., embodying substantial creative efforts and confidential information, ideas, and expressions. You agree to treat, and take precautions to ensure that your employees and other third parties treat, the Software as confidential in accordance with the confidentiality requirements herein.
"},{"location":"setup_binary/seafile_professional_sdition_software_license_agreement/#6-disclaimer-of-warranties","title":"6. DISCLAIMER OF WARRANTIES","text":"EXCEPT AS OTHERWISE SET FORTH IN THIS AGREEMENT THE SOFTWARE IS PROVIDED TO YOU \"AS IS\", AND Seafile Ltd. MAKES NO EXPRESS OR IMPLIED WARRANTIES WITH RESPECT TO ITS FUNCTIONALITY, CONDITION, PERFORMANCE, OPERABILITY OR USE. WITHOUT LIMITING THE FOREGOING, Seafile Ltd. DISCLAIMS ALL IMPLIED WARRANTIES INCLUDING, WITHOUT LIMITATION, ANY IMPLIED WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE OR FREEDOM FROM INFRINGEMENT. SOME JURISDICTIONS DO NOT ALLOW THE EXCLUSION OF IMPLIED WARRANTIES, SO THE ABOVE EXCLUSIONS MAY NOT APPLY TO YOU. THE LIMITED WARRANTY HEREIN GIVES YOU SPECIFIC LEGAL RIGHTS, AND YOU MAY ALSO HAVE OTHER RIGHTS THAT VARY FROM ONE JURISDICTION TO ANOTHER.
"},{"location":"setup_binary/seafile_professional_sdition_software_license_agreement/#7-limitation-of-liability","title":"7. LIMITATION OF LIABILITY","text":"YOU ACKNOWLEDGE AND AGREE THAT THE CONSIDERATION WHICH Seafile Ltd. IS CHARGING HEREUNDER DOES NOT INCLUDE ANY CONSIDERATION FOR ASSUMPTION BY Seafile Ltd. OF THE RISK OF YOUR CONSEQUENTIAL OR INCIDENTAL DAMAGES WHICH MAY ARISE IN CONNECTION WITH YOUR USE OF THE SOFTWARE. ACCORDINGLY, YOU AGREE THAT Seafile Ltd. SHALL NOT BE RESPONSIBLE TO YOU OR ANY THIRD PARTY FOR ANY LOSS-OF-PROFIT, LOST SAVINGS, INDIRECT, INCIDENTAL, SPECIAL OR CONSEQUENTIAL DAMAGES ARISING OUT OF A LICENSING OR USE OF THE SOFTWARE.
"},{"location":"setup_binary/seafile_professional_sdition_software_license_agreement/#8-indemnification","title":"8. INDEMNIFICATION","text":"You agree to defend, indemnify and hold Seafile Ltd. and its employees, agents, representatives and assigns harmless from and against any claims, proceedings, damages, injuries, liabilities, costs, attorney's fees relating to or arising out of Your use of the Software or any breach of this Agreement.
"},{"location":"setup_binary/seafile_professional_sdition_software_license_agreement/#9-termination","title":"9. TERMINATION","text":"Your license is effective until terminated. You may terminate it at any time by destroying the Software or returning all copies of the Software to Seafile Ltd.. Your license will terminate immediately without notice if You breach any of the terms and conditions of this Agreement, including non or incomplete payment of the license fee. Upon termination of this Agreement for any reason: You will uninstall all copies of the Software; You will immediately cease and desist all use of the Software; and will destroy all copies of the software in your possession.
"},{"location":"setup_binary/seafile_professional_sdition_software_license_agreement/#10-updates-and-support","title":"10. UPDATES AND SUPPORT","text":"Seafile Ltd. has the right, but no obligation, to periodically update the Software, at its complete discretion, without the consent or obligation to You or any licensee or user.
YOU HEREBY ACKNOWLEDGE THAT YOU HAVE READ THIS AGREEMENT, UNDERSTAND IT AND AGREE TO BE BOUND BY ITS TERMS AND CONDITIONS.
"},{"location":"setup_binary/setup_seafile_cluster_with_nfs/","title":"Setup Seafile cluster with NFS","text":"In a Seafile cluster, one common way to share data among the Seafile server instances is to use NFS. You should only share the files objects (located in seafile-data
folder) and user avatars as well as thumbnails (located in seahub-data
folder) on NFS. Here we'll provide a tutorial about how and what to share.
How to set up an NFS server and client is beyond the scope of this wiki. Here are a few references:
Suppose your Seafile server installation directory is /data/haiwen; after you run the setup script there should be seafile-data and seahub-data directories in it. Suppose also that you mount the NFS drive at /seafile-nfs. Then follow these steps:
Move the seafile-data and seahub-data folders to /seafile-nfs:
mv /data/haiwen/seafile-data /seafile-nfs/\nmv /data/haiwen/seahub-data /seafile-nfs/\n
Create symbolic links to the seafile-data and seahub-data folders:
cd /data/haiwen\nln -s /seafile-nfs/seafile-data /data/haiwen/seafile-data\nln -s /seafile-nfs/seahub-data /data/haiwen/seahub-data\n
This way the instances will share the same seafile-data
and seahub-data
folder. All other config files and log files will remain independent.
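You can verify that both paths now resolve through the NFS mount (paths as in the example above):

```
ls -ld /data/haiwen/seafile-data /data/haiwen/seahub-data
df -h /seafile-nfs
```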
For example Debian 12
Create the systemd service files, changing ${seafile_dir} to your Seafile installation location and seafile to the user who runs Seafile (if appropriate). Then reload systemd's daemons: systemctl daemon-reload.
First, create a script to activate the python virtual environment. It goes in the ${seafile_dir} directory, that is, not in "seafile-server-latest" but in the directory above it. Throughout this manual the examples use /opt/seafile for this directory, but you might have chosen to use a different directory.
sudo vim /opt/seafile/run_with_venv.sh\n
The content of the file is:
#!/bin/bash\n# Activate the python virtual environment (venv) before starting one of the seafile scripts\n\ndir_name=\"$(dirname $0)\"\nsource \"${dir_name}/python-venv/bin/activate\"\nscript=\"$1\"\nshift 1\n\necho \"${dir_name}/seafile-server-latest/${script}\" \"$@\"\n\"${dir_name}/seafile-server-latest/${script}\" \"$@\"\n
Make this script executable: sudo chmod 755 /opt/seafile/run_with_venv.sh\n
"},{"location":"setup_binary/start_seafile_at_system_bootup/#seafile-component","title":"Seafile component","text":"sudo vim /etc/systemd/system/seafile.service\n
The content of the file is:
[Unit]\nDescription=Seafile\n# add mysql.service or postgresql.service depending on your database to the line below\nAfter=network.target\n\n[Service]\nType=forking\nExecStart=bash ${seafile_dir}/run_with_venv.sh seafile.sh start\nExecStop=bash ${seafile_dir}/seafile-server-latest/seafile.sh stop\nLimitNOFILE=infinity\nUser=seafile\nGroup=seafile\n\n[Install]\nWantedBy=multi-user.target\n
"},{"location":"setup_binary/start_seafile_at_system_bootup/#seahub-component","title":"Seahub component","text":"sudo vim /etc/systemd/system/seahub.service\n
The content of the file is:
[Unit]\nDescription=Seafile hub\nAfter=network.target seafile.service\n\n[Service]\nType=forking\nExecStart=bash ${seafile_dir}/run_with_venv.sh seahub.sh start\nExecStop=bash ${seafile_dir}/seafile-server-latest/seahub.sh stop\nUser=seafile\nGroup=seafile\n\n[Install]\nWantedBy=multi-user.target\n
"},{"location":"setup_binary/start_seafile_at_system_bootup/#for-systems-running-systemd-without-python-virtual-environment","title":"For systems running systemd without python virtual environment","text":"For example Debian 8 through Debian 11, Linux Ubuntu 15.04 and newer
Create the systemd service files, changing ${seafile_dir} to your Seafile installation location and seafile to the user who runs Seafile (if appropriate). Then reload systemd's daemons: systemctl daemon-reload.
"},{"location":"setup_binary/start_seafile_at_system_bootup/#seafile-component_1","title":"Seafile component","text":"sudo vim /etc/systemd/system/seafile.service\n
The content of the file is:
[Unit]\nDescription=Seafile\n# add mysql.service or postgresql.service depending on your database to the line below\nAfter=network.target\n\n[Service]\nType=forking\nExecStart=${seafile_dir}/seafile-server-latest/seafile.sh start\nExecStop=${seafile_dir}/seafile-server-latest/seafile.sh stop\nLimitNOFILE=infinity\nUser=seafile\nGroup=seafile\n\n[Install]\nWantedBy=multi-user.target\n
"},{"location":"setup_binary/start_seafile_at_system_bootup/#seahub-component_1","title":"Seahub component","text":"Create systemd service file /etc/systemd/system/seahub.service
sudo vim /etc/systemd/system/seahub.service\n
The content of the file is:
[Unit]\nDescription=Seafile hub\nAfter=network.target seafile.service\n\n[Service]\nType=forking\nExecStart=${seafile_dir}/seafile-server-latest/seahub.sh start\nExecStop=${seafile_dir}/seafile-server-latest/seahub.sh stop\nUser=seafile\nGroup=seafile\n\n[Install]\nWantedBy=multi-user.target\n
"},{"location":"setup_binary/start_seafile_at_system_bootup/#seafile-cli-client-optional","title":"Seafile cli client (optional)","text":"Create systemd service file /etc/systemd/system/seafile-client.service
You need to create this service file only if you have the Seafile console client and you want to run it on system boot.
sudo vim /etc/systemd/system/seafile-client.service\n
The content of the file is:
[Unit]\nDescription=Seafile client\n# Uncomment the next line if you are running the seafile client on the same computer as the server\n# After=seafile.service\n# Otherwise, use the next one\n# After=network.target\n\n[Service]\nType=oneshot\nExecStart=/usr/bin/seaf-cli start\nExecStop=/usr/bin/seaf-cli stop\nRemainAfterExit=yes\nUser=seafile\nGroup=seafile\n\n[Install]\nWantedBy=multi-user.target\n
"},{"location":"setup_binary/start_seafile_at_system_bootup/#enable-service-start-on-system-boot","title":"Enable service start on system boot","text":"sudo systemctl enable seafile.service\nsudo systemctl enable seahub.service\nsudo systemctl enable seafile-client.service # optional\n
"},{"location":"setup_binary/using_logrotate/","title":"Set up logrotate for server","text":""},{"location":"setup_binary/using_logrotate/#how-it-works","title":"How it works","text":"seaf-server support reopenning logfiles by receiving a SIGUR1
signal.
This feature is very useful when you need cut logfiles while you don't want to shutdown the server. All you need to do now is cutting the logfile on the fly.
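Why the signal is needed can be demonstrated with a small self-contained sketch (throwaway files in /tmp, no Seafile required): a process that keeps a log file descriptor open continues writing to the renamed file until it reopens the path, which is exactly what SIGUSR1 triggers in seaf-server.

```shell
# Demonstrates why a renamed log file keeps receiving writes until the
# writer reopens it (the reopen is what seaf-server does on SIGUSR1).
logfile=/tmp/seafile-logrotate-demo.log
rm -f "$logfile" "$logfile.1"

exec 3>>"$logfile"           # writer holds an open fd, like seaf-server
echo "before rotate" >&3
mv "$logfile" "$logfile.1"   # logrotate renames the file...
echo "after rotate" >&3      # ...but this still lands in the renamed file
exec 3>>"$logfile"           # reopening creates a fresh log file
echo "after reopen" >&3
exec 3>&-
```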
"},{"location":"setup_binary/using_logrotate/#default-logrotate-configuration-directory","title":"Default logrotate configuration directory","text":"For Debian, the default directory for logrotate should be /etc/logrotate.d/
Assuming your seaf-server's log file is set to /opt/seafile/logs/seafile.log and its pid file is set to /opt/seafile/pids/seaf-server.pid:
The configuration for logrotate could be like this:
/opt/seafile/logs/seafile.log\n/opt/seafile/logs/seahub.log\n/opt/seafile/logs/seafdav.log\n/opt/seafile/logs/fileserver-access.log\n/opt/seafile/logs/fileserver-error.log\n/opt/seafile/logs/fileserver.log\n/opt/seafile/logs/file_updates_sender.log\n/opt/seafile/logs/repo_old_file_auto_del_scan.log\n/opt/seafile/logs/seahub_email_sender.log\n/opt/seafile/logs/index.log\n{\n daily\n missingok\n rotate 7\n # compress\n # delaycompress\n dateext\n dateformat .%Y-%m-%d\n notifempty\n # create 644 root root\n sharedscripts\n postrotate\n if [ -f /opt/seafile/pids/seaf-server.pid ]; then\n kill -USR1 `cat /opt/seafile/pids/seaf-server.pid`\n fi\n\n if [ -f /opt/seafile/pids/fileserver.pid ]; then\n kill -USR1 `cat /opt/seafile/pids/fileserver.pid`\n fi\n\n if [ -f /opt/seafile/pids/seahub.pid ]; then\n kill -HUP `cat /opt/seafile/pids/seahub.pid`\n fi\n\n if [ -f /opt/seafile/pids/seafdav.pid ]; then\n kill -HUP `cat /opt/seafile/pids/seafdav.pid`\n fi\n\n find /opt/seafile/logs/ -mtime +7 -name \"*.log*\" -exec rm -f {} \\;\n endscript\n}\n
You can save this file, in Debian for example, at /etc/logrotate.d/seafile.
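You can dry-run the configuration before relying on it; with -d, logrotate only prints what it would do without rotating anything:

```
logrotate -d /etc/logrotate.d/seafile
```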
There are three types of upgrade, i.e., major version upgrade, minor version upgrade and maintenance version upgrade. This page contains general instructions for the three types of upgrade.
Please check the upgrade notes for any special configuration or changes before/while upgrading.
Suppose you are using version 5.1.0 and would like to upgrade to version 6.1.0. First download and extract the new version. You should have a directory layout similar to this:
seafile\n -- seafile-server-5.1.0\n -- seafile-server-6.1.0\n -- ccnet\n -- seafile-data\n
Now upgrade to version 6.1.0.
Shutdown Seafile server if it's running
cd seafile/seafile-server-latest\n./seahub.sh stop\n./seafile.sh stop\n# or via service\n/etc/init.d/seafile-server stop\n
Check the upgrade scripts in seafile-server-6.1.0 directory.
cd seafile/seafile-server-6.1.0\nls upgrade/upgrade_*\n
You will get a list of upgrade files:
...\nupgrade_5.0_5.1.sh\nupgrade_5.1_6.0.sh\nupgrade_6.0_6.1.sh\n
Starting from your current version, run the upgrade scripts one by one:
upgrade/upgrade_5.1_6.0.sh\nupgrade/upgrade_6.0_6.1.sh\n
Start Seafile server
cd seafile/seafile-server-latest/\n./seafile.sh start\n./seahub.sh start # or \"./seahub.sh start-fastcgi\" if you're using fastcgi\n# or via service\n/etc/init.d/seafile-server start\n
If the new version works fine, the old version can be removed
rm -rf seafile-server-5.1.0/\n
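If you want to double check, the seafile-server-latest symlink should now point at the new version directory (layout as above):

```
cd seafile && readlink seafile-server-latest
```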
"},{"location":"upgrade/upgrade/#minor-version-upgrade-eg-from-61x-to-62y","title":"Minor version upgrade (e.g. from 6.1.x to 6.2.y)","text":"Suppose you are using version 6.1.0 and like to upgrade to version 6.2.0. First download and extract the new version. You should have a directory layout similar to this:
seafile\n -- seafile-server-6.1.0\n -- seafile-server-6.2.0\n -- ccnet\n -- seafile-data\n
Now upgrade to version 6.2.0.
cd seafile/seafile-server-latest\n./seahub.sh stop\n./seafile.sh stop\n# or via service\n/etc/init.d/seafile-server stop\n
Check the upgrade scripts in seafile-server-6.2.0 directory.
cd seafile/seafile-server-6.2.0\nls upgrade/upgrade_*\n
You will get a list of upgrade files:
...\nupgrade/upgrade_5.1_6.0.sh\nupgrade/upgrade_6.0_6.1.sh\nupgrade/upgrade_6.1_6.2.sh\n
Starting from your current version, run the upgrade scripts one by one:
upgrade/upgrade_6.1_6.2.sh\n
Start Seafile server
./seafile.sh start\n./seahub.sh start\n# or via service\n/etc/init.d/seafile-server start\n
If the new version works, the old version can be removed
rm -rf seafile-server-6.1.0/\n
"},{"location":"upgrade/upgrade/#maintenance-version-upgrade-eg-from-622-to-623","title":"Maintenance version upgrade (e.g. from 6.2.2 to 6.2.3)","text":"A maintenance upgrade is for example an upgrade from 6.2.2 to 6.2.3.
For this type of upgrade, you only need to update the symbolic links (for avatar and a few other folders). A script to perform a minor upgrade is provided with Seafile server (for historical reasons, the script is called minor-upgrade.sh
):
cd seafile-server-6.2.3/upgrade/ && ./minor-upgrade.sh\n
Start Seafile
If the new version works, the old version can be removed
rm -rf seafile-server-6.2.2/\n
Seafile adds new features in major and minor versions. It is likely that some database tables need to be modified or the search index needs to be updated. In general, upgrading a cluster involves the following steps:
A maintenance upgrade is simple: you only need to run the script ./upgrade/minor_upgrade.sh
at each node to update the symbolic link.
On the background node, Seahub no longer needs to be started, and Nginx is not needed either.
The way the office converter works has changed. Seahub on the front-end nodes now directly accesses a service on the background node.
"},{"location":"upgrade/upgrade_a_cluster/#for-front-end-nodes","title":"For front-end nodes","text":"seahub_settings.py
OFFICE_CONVERTOR_ROOT = 'http://<ip of node background>'\n\u2b07\ufe0f\nOFFICE_CONVERTOR_ROOT = 'http://<ip of node background>:6000'\n
seafevents.conf
[OFFICE CONVERTER]\nenabled = true\nworkers = 1\nmax-size = 10\n\n\u2b07\ufe0f\n[OFFICE CONVERTER]\nenabled = true\nworkers = 1\nmax-size = 10\nhost = <ip of node background>\nport = 6000\n
"},{"location":"upgrade/upgrade_a_cluster/#for-backend-node","title":"For backend node","text":"seahub_settings.py is not needed. But you can leave it unchanged.
seafevents.conf
[OFFICE CONVERTER]\nenabled = true\nworkers = 1\nmax-size = 10\n\n\u2b07\ufe0f\n[OFFICE CONVERTER]\nenabled = true\nworkers = 1\nmax-size = 10\nhost = <ip of node background>\nport = 6000\n
"},{"location":"upgrade/upgrade_a_cluster/#from-63-to-70","title":"From 6.3 to 7.0","text":"No special upgrade operations.
"},{"location":"upgrade/upgrade_a_cluster/#from-62-to-63","title":"From 6.2 to 6.3","text":"In version 6.2.11, the included Django was upgraded. The memcached configuration needed to be upgraded if you were using a cluster. If you upgrade from a version below 6.1.11, don't forget to change your memcache configuration. If the configuration in your seahub_settings.py
is:
CACHES = {\n 'default': {\n 'BACKEND': 'django_pylibmc.memcached.PyLibMCCache',\n 'LOCATION': '<MEMCACHED SERVER IP>:11211',\n }\n}\n\nCOMPRESS_CACHE_BACKEND = 'django.core.cache.backends.locmem.LocMemCache'\n
Now you need to change to:
CACHES = {\n 'default': {\n 'BACKEND': 'django_pylibmc.memcached.PyLibMCCache',\n 'LOCATION': '<MEMCACHED SERVER IP>:11211',\n },\n 'locmem': {\n 'BACKEND': 'django.core.cache.backends.locmem.LocMemCache',\n },\n}\nCOMPRESS_CACHE_BACKEND = 'locmem'\n
"},{"location":"upgrade/upgrade_a_cluster/#from-61-to-62","title":"From 6.1 to 6.2","text":"No special upgrade operations.
"},{"location":"upgrade/upgrade_a_cluster/#from-60-to-61","title":"From 6.0 to 6.1","text":"In version 6.1, we upgraded the included ElasticSearch server. The old server listen on port 9500, new server listen on port 9200. Please change your firewall settings.
"},{"location":"upgrade/upgrade_a_cluster/#from-51-to-60","title":"From 5.1 to 6.0","text":"In version 6.0, the folder download mechanism has been updated. This requires that, in a cluster deployment, seafile-data/httptemp folder must be in an NFS share. You can make this folder a symlink to the NFS share.
cd /data/haiwen/\nln -s /nfs-share/seafile-httptemp seafile-data/httptemp\n
The httptemp folder only contains temp files for downloading/uploading file on web UI. So there is no reliability requirement for the NFS share. You can export it from any node in the cluster.
"},{"location":"upgrade/upgrade_a_cluster/#from-v50-to-v51","title":"From v5.0 to v5.1","text":"Because Django is upgraded to 1.8, the COMPRESS_CACHE_BACKEND should be changed
- COMPRESS_CACHE_BACKEND = 'locmem://'\n + COMPRESS_CACHE_BACKEND = 'django.core.cache.backends.locmem.LocMemCache'\n
"},{"location":"upgrade/upgrade_a_cluster/#from-v44-to-v50","title":"From v4.4 to v5.0","text":"v5.0 introduces some database schema change, and all configuration files (ccnet.conf, seafile.conf, seafevents.conf, seahub_settings.py) are moved to a central config directory.
Perform the following steps to upgrade:
./upgrade/upgrade_4.4_5.0.sh\n
SEAFILE_SKIP_DB_UPGRADE
environmental variable turned on:SEAFILE_SKIP_DB_UPGRADE=1 ./upgrade/upgrade_4.4_5.0.sh\n
After the upgrade, you should see the configuration files has been moved to the conf/ folder.
conf/\n |__ seafile.conf\n |__ seafevent.conf\n |__ seafdav.conf\n |__ seahub_settings.conf\n
"},{"location":"upgrade/upgrade_a_cluster/#from-v43-to-v44","title":"From v4.3 to v4.4","text":"There are no database and search index upgrade from v4.3 to v4.4. Perform the following steps to upgrade:
v4.3 contains no database table change from v4.2. But the old search index will be deleted and regenerated.
A new option COMPRESS_CACHE_BACKEND = 'django.core.cache.backends.locmem.LocMemCache' should be added to seahub_settings.py
The secret key in seahub_settings.py need to be regenerated, the old secret key lack enough randomness.
Perform the following steps to upgrade:
Seafile adds new features in major and minor versions. It is likely that some database tables need to be modified or the search index need to be updated. In general, upgrading a cluster contains the following steps:
In general, to upgrade a cluster, you need:
Maintanence upgrade only needs to download the new image, stop the old docker container, modify the Seafile image version in docker-compose.yml to the new version. Start with docker compose up.
"},{"location":"upgrade/upgrade_a_cluster_docker/#upgrade-from-100-to-110","title":"Upgrade from 10.0 to 11.0","text":"Migrate your configuration for LDAP and OAuth according to here
"},{"location":"upgrade/upgrade_a_cluster_docker/#upgrade-from-90-to-100","title":"Upgrade from 9.0 to 10.0","text":"If you are using with ElasticSearch, SAML SSO and storage backend features, follow the upgrading manual on how to update the configuration for these features.
If you want to use the new notification server and rate control (pro edition only), please refer to the upgrading manual.
"},{"location":"upgrade/upgrade_a_cluster_docker/#upgrade-from-80-to-90","title":"Upgrade from 8.0 to 9.0","text":"If you are using with ElasticSearch, follow the upgrading manual on how to update the configuration.
"},{"location":"upgrade/upgrade_docker/","title":"Upgrade Seafile Docker","text":"For maintenance upgrade, like from version 10.0.1 to version 10.0.4, just download the new image, stop the old docker container, modify the Seafile image version in docker-compose.yml to the new version, then start with docker compose up.
For major version upgrade, like from 10.0 to 11.0, see instructions below.
Please check the upgrade notes for any special configuration or changes before/while upgrading.
"},{"location":"upgrade/upgrade_docker/#upgrade-from-110-to-120","title":"Upgrade from 11.0 to 12.0","text":"Note: If you have a large number of Activity
in MySQL, clear this table first Clean Database. Otherwise, the database upgrade will take a long time.
From Seafile Docker 12.0, we recommend that you use .env
and seafile-server.yml
files for configuration.
mv docker-compose.yml docker-compose.yml.bak\n
"},{"location":"upgrade/upgrade_docker/#download-seafile-120-docker-files","title":"Download Seafile 12.0 Docker files","text":"Download .env, seafile-server.yml and caddy.yml, and modify .env file according to the old configuration in docker-compose.yml.bak
wget -O .env https://manual.seafile.com/12.0/docker/ce/env\nwget https://manual.seafile.com/12.0/docker/ce/seafile-server.yml\nwget https://manual.seafile.com/12.0/docker/caddy.yml\n
The following fields merit particular attention: Variable Description Default Value SEAFILE_VOLUME
The volume directory of Seafile data /opt/seafile-data
SEAFILE_MYSQL_VOLUME
The volume directory of MySQL data /opt/seafile-mysql/db
SEAFILE_CADDY_VOLUME
The volume directory of Caddy data used to store certificates obtained from Let's Encrypt's /opt/seafile-caddy
SEAFILE_MYSQL_DB_USER
The user of MySQL (database
- user
can be found in conf/seafile.conf
) seafile
SEAFILE_MYSQL_DB_PASSWORD
The user seafile
password of MySQL (required) SEAFILE_MYSQL_DB_CCNET_DB_NAME
The database name of ccnet ccnet_db
SEAFILE_MYSQL_DB_SEAFILE_DB_NAME
The database name of seafile seafile_db
SEAFILE_MYSQL_DB_SEAHUB_DB_NAME
The database name of seahub seahub_db
JWT
JWT_PRIVATE_KEY, A random string with a length of no less than 32 characters is required for Seafile, which can be generated by using pwgen -s 40 1
(required) SEAFILE_SERVER_HOSTNAME
Seafile server hostname or domain (required) SEAFILE_SERVER_PROTOCOL
Seafile server protocol (http or https) http
TIME_ZONE
Time zone UTC
wget -O .env https://manual.seafile.com/12.0/docker/pro/env\nwget https://manual.seafile.com/12.0/docker/pro/seafile-server.yml\nwget https://manual.seafile.com/12.0/docker/caddy.yml\n
The following fields merit particular attention: Variable Description Default Value SEAFILE_VOLUME
The volume directory of Seafile data /opt/seafile-data
SEAFILE_MYSQL_VOLUME
The volume directory of MySQL data /opt/seafile-mysql/db
SEAFILE_CADDY_VOLUME
The volume directory of Caddy data used to store certificates obtained from Let's Encrypt's /opt/seafile-caddy
SEAFILE_ELASTICSEARCH_VOLUME
(Only valid for Seafile PE) The volume directory of Elasticsearch data /opt/seafile-elasticsearch/data
SEAFILE_MYSQL_DB_USER
The user of MySQL (database
- user
can be found in conf/seafile.conf
) seafile
SEAFILE_MYSQL_DB_PASSWORD
The user seafile
password of MySQL (required) JWT
JWT_PRIVATE_KEY, A random string with a length of no less than 32 characters is required for Seafile, which can be generated by using pwgen -s 40 1
(required) SEAFILE_SERVER_HOSTNAME
Seafile server hostname or domain (required) SEAFILE_SERVER_PROTOCOL
Seafile server protocol (http or https) http
TIME_ZONE
Time zone UTC
Note
seafile.conf
).INIT_SEAFILE_MYSQL_ROOT_PASSWORD
, INIT_SEAFILE_ADMIN_EMAIL
, INIT_SEAFILE_ADMIN_PASSWORD
), you can remove it in the .env
file.SSL is now handled by the caddy server. If you have used SSL before, you will also need modify the seafile.nginx.conf. Change server listen 443 to 80.
Backup the original seafile.nginx.conf file:
cp seafile.nginx.conf seafile.nginx.conf.bak\n
Remove the server listen 80
section:
#server {\n# listen 80;\n# server_name _ default_server;\n\n # allow certbot to connect to challenge location via HTTP Port 80\n # otherwise renewal request will fail\n# location /.well-known/acme-challenge/ {\n# alias /var/www/challenges/;\n# try_files $uri =404;\n# }\n\n# location / {\n# rewrite ^ https://seafile.example.com$request_uri? permanent;\n# }\n#}\n
Change server listen 443 to 80:
server {\n#listen 443 ssl;\nlisten 80;\n\n# ssl_certificate /shared/ssl/pkg.seafile.top.crt;\n# ssl_certificate_key /shared/ssl/pkg.seafile.top.key;\n\n# ssl_ciphers ECDH+AESGCM:DH+AESGCM:ECDH+AES256:DH+AES256:ECDH+AES128:DH+AES:ECDH+3DES:DH+3DES:RSA+AESGCM:RSA+AES:RSA+3DES:!aNULL:!MD5:!DSS;\n\n ...\n
Start with docker compose up.
"},{"location":"upgrade/upgrade_docker/#upgrade-notification-server","title":"Upgrade notification server","text":"If you has deployed the notification server. The Notification Server is now moved to its own Docker image. You need to redeploy it according to Notification Server document
"},{"location":"upgrade/upgrade_docker/#upgrade-seadoc-from-08-to-10-for-seafile-v120","title":"Upgrade SeaDoc from 0.8 to 1.0 for Seafile v12.0","text":"If you have deployed SeaDoc v0.8 with Seafile v11.0, you can upgrade it to 1.0 use the following steps:
From version 1.0, SeaDoc uses the seahub_db database to store its operation logs and no longer needs the extra database sdoc_db. The database tables in seahub_db are created automatically when you upgrade Seafile server from v11.0 to v12.0. You can simply delete sdoc_db.
"},{"location":"upgrade/upgrade_docker/#remove-seadoc-configs-in-seafilenginxconf-file","title":"Remove SeaDoc configs in seafile.nginx.conf file","text":"If you have deployed SeaDoc older version, you should remove /sdoc-server/
, /socket.io
configs in seafile.nginx.conf file.
# location /sdoc-server/ {\n# add_header Access-Control-Allow-Origin *;\n# add_header Access-Control-Allow-Methods GET,POST,PUT,DELETE,OPTIONS;\n# add_header Access-Control-Allow-Headers \"deviceType,token, authorization, content-type\";\n# if ($request_method = 'OPTIONS') {\n# add_header Access-Control-Allow-Origin *;\n# add_header Access-Control-Allow-Methods GET,POST,PUT,DELETE,OPTIONS;\n# add_header Access-Control-Allow-Headers \"deviceType,token, authorization, content-type\";\n# return 204;\n# }\n# proxy_pass http://sdoc-server:7070/;\n# proxy_redirect off;\n# proxy_set_header Host $host;\n# proxy_set_header X-Real-IP $remote_addr;\n# proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;\n# proxy_set_header X-Forwarded-Host $server_name;\n# proxy_set_header X-Forwarded-Proto $scheme;\n# client_max_body_size 100m;\n# }\n# location /socket.io {\n# proxy_pass http://sdoc-server:7070;\n# proxy_http_version 1.1;\n# proxy_set_header Upgrade $http_upgrade;\n# proxy_set_header Connection 'upgrade';\n# proxy_redirect off;\n# proxy_buffers 8 32k;\n# proxy_buffer_size 64k;\n# proxy_set_header X-Real-IP $remote_addr;\n# proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;\n# proxy_set_header Host $http_host;\n# proxy_set_header X-NginX-Proxy true;\n# }\n
"},{"location":"upgrade/upgrade_docker/#deploy-a-new-seadoc-server","title":"Deploy a new SeaDoc server","text":"Please see the document Setup SeaDoc to install SeaDoc with Seafile.
"},{"location":"upgrade/upgrade_docker/#upgrade-from-100-to-110","title":"Upgrade from 10.0 to 11.0","text":"Download the new image, stop the old docker container, modify the Seafile image version in docker-compose.yml to the new version. Taking the community edition as an example, you have to modify
...\nservice:\n ...\n seafile:\n image: seafileltd/seafile-mc:10.0-latest\n ...\n ...\n
to
service:\n ...\n seafile:\n image: seafileltd/seafile-mc:11.0-latest\n ...\n ...\n
It is also recommended that you upgrade mariadb and memcached to newer versions as in the v11.0 docker-compose.yml file. Specifically, in version 11.0, we use the following versions:
What's more, you have to migrate the configuration for LDAP and OAuth as described here
Start with docker compose up.
"},{"location":"upgrade/upgrade_docker/#upgrade-from-90-to-100","title":"Upgrade from 9.0 to 10.0","text":"Just download the new image, stop the old docker container, modify the Seafile image version in docker-compose.yml to the new version, then start with docker compose up.
If you are using pro edition with ElasticSearch, SAML SSO and storage backend features, follow the upgrading manual on how to update the configuration for these features.
If you want to use the new notification server and rate control (pro edition only), please refer to the upgrading manual.
"},{"location":"upgrade/upgrade_docker/#upgrade-from-80-to-90","title":"Upgrade from 8.0 to 9.0","text":"Just download the new image, stop the old docker container, modify the Seafile image version in docker-compose.yml to the new version, then start with docker compose up.
"},{"location":"upgrade/upgrade_docker/#lets-encrypt-ssl-certificate","title":"Let's encrypt SSL certificate","text":"Since version 9.0.6, we use Acme V3 (not acme-tiny) to get certificate.
If there is a certificate generated by an old version, you need to back up and move the old certificate directory and the seafile.nginx.conf before starting.
mv /opt/seafile/shared/ssl /opt/seafile/shared/ssl-bak\n\nmv /opt/seafile/shared/nginx/conf/seafile.nginx.conf /opt/seafile/shared/nginx/conf/seafile.nginx.conf.bak\n
Starting the new container will automatically apply for a certificate.
docker compose down\ndocker compose up -d\n
Please wait a moment for the certificate to be issued, then you can modify the new seafile.nginx.conf as needed. Execute the following command to make the nginx configuration take effect.
docker exec seafile nginx -s reload\n
A cron job inside the container will automatically renew the certificate.
"},{"location":"upgrade/upgrade_docker/#upgrade-from-71-to-80","title":"Upgrade from 7.1 to 8.0","text":"Just download the new image, stop the old docker container, modify the Seafile image version in docker-compose.yml to the new version, then start with docker compose up.
"},{"location":"upgrade/upgrade_docker/#upgrade-from-70-to-71","title":"Upgrade from 7.0 to 7.1","text":"Just download the new image, stop the old docker container, modify the Seafile image version in docker-compose.yml to the new version, then start with docker compose up.
"},{"location":"upgrade/upgrade_notes_for_10.0.x/","title":"Upgrade notes for 10.0","text":"These notes give additional information about changes. Please always follow the main upgrade guide.
For docker based version, please check upgrade Seafile Docker image
"},{"location":"upgrade/upgrade_notes_for_10.0.x/#important-release-changes","title":"Important release changes","text":""},{"location":"upgrade/upgrade_notes_for_10.0.x/#enable-notification-server","title":"Enable notification server","text":"The notification server enables desktop syncing and drive clients to get notification of library changes immediately using websocket. There are two benefits:
The notification server works with Seafile syncing client 9.0+ and drive client 3.0+.
Please follow the document to enable notification server
"},{"location":"upgrade/upgrade_notes_for_10.0.x/#memcached-section-in-the-seafileconf-pro-edition-only","title":"Memcached section in the seafile.conf (pro edition only)","text":"If you use storage backend or cluster, make sure the memcached section is in the seafile.conf.
Since version 10.0, all memcached options are consolidated to the one below.
Modify the seafile.conf:
[memcached]\nmemcached_options = --SERVER=<the IP of Memcached Server> --POOL-MIN=10 --POOL-MAX=100\n
"},{"location":"upgrade/upgrade_notes_for_10.0.x/#saml-sso-change-pro-edition-only","title":"SAML SSO change (pro edition only)","text":"The configuration for SAML SSO in Seafile is greatly simplified. Now only three options are needed:
ENABLE_ADFS_LOGIN = True\nLOGIN_REDIRECT_URL = '/saml2/complete/'\nSAML_REMOTE_METADATA_URL = 'https://login.microsoftonline.com/xxx/federationmetadata/2007-06/federationmetadata.xml?appid=xxx'\nSAML_ATTRIBUTE_MAPPING = {\n 'name': ('display_name', ),\n 'mail': ('contact_email', ),\n ...\n}\n
Please check the new document on SAML SSO
"},{"location":"upgrade/upgrade_notes_for_10.0.x/#rate-control-in-role-settings-pro-edition-only","title":"Rate control in role settings (pro edition only)","text":"Starting from version 10.0, Seafile allows administrators to configure upload and download speed limits for users with different roles through the following two steps:
1. Add the following option to seahub_settings.py:
ENABLED_ROLE_PERMISSIONS = {\n 'default': {\n ...\n 'upload_rate_limit': 2000, # unit: kb/s\n 'download_rate_limit': 4000,\n ...\n },\n 'guest': {\n ...\n 'upload_rate_limit': 100,\n 'download_rate_limit': 200,\n ...\n },\n}\n
2. Run the following command in the seafile-server-latest directory to make the configuration take effect:
./seahub.sh python-env python3 seahub/manage.py set_user_role_upload_download_rate_limit\n
"},{"location":"upgrade/upgrade_notes_for_10.0.x/#elasticsearch-change-pro-edition-only","title":"ElasticSearch change (pro edition only)","text":"Elasticsearch is upgraded to version 8.x, fixed and improved some issues of file search function.
Since Elasticsearch 7.x, the default number of shards has changed from 5 to 1, because too many index shards over-occupy system resources; on the other hand, when a single shard holds too much data, search performance also suffers. Starting from version 10.0, Seafile supports customizing the number of shards in the configuration file.
You can use the following command to query the current size of each shard to determine the best number of shards for you:
curl 'http{s}://<es IP>:9200/_cat/shards/repofiles?v'\n
The official recommendation is that the size of each shard should be between 10 GB and 50 GB: https://www.elastic.co/guide/en/elasticsearch/reference/8.6/size-your-shards.html#shard-size-recommendation.
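As a rough, unofficial illustration of that guidance, a shard count can be estimated from the current total index size so that each shard lands in the recommended range. The 30 GB target below is an assumption chosen from the middle of the 10-50 GB range, not a Seafile or Elasticsearch default:

```python
import math

def recommended_shards(total_index_gb, target_shard_gb=30):
    """Estimate a shard count so each shard lands near the 10-50 GB
    range recommended by Elasticsearch (target size is an assumption)."""
    return max(1, math.ceil(total_index_gb / target_shard_gb))

# A 300 GB repofiles index would suggest 10 shards of ~30 GB each.
print(recommended_shards(300))  # → 10
```

Use the shard-size query above to find your current index size, then set `shards` in seafevents.conf accordingly.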
Modify the seafevents.conf:
[INDEX FILES]\n...\nshards = 10 # default is 5\n...\n
"},{"location":"upgrade/upgrade_notes_for_10.0.x/#new-python-libraries","title":"New Python libraries","text":"Note, you should install Python libraries system wide using root user or sudo mode.
For Ubuntu 20.04/22.04
sudo pip3 install future==0.18.* mysqlclient==2.1.* pillow==10.2.* captcha==0.5.* django_simple_captcha==0.5.20 djangosaml2==1.5.* pysaml2==7.2.* pycryptodome==3.16.* cffi==1.15.1\n
For Debian 11
sudo pip3 install future==0.18.* mysqlclient==2.1.* pillow==9.3.* captcha==0.4 django_simple_captcha==0.5.* djangosaml2==1.5.* pysaml2==7.2.* pycryptodome==3.16.* cffi==1.15.1\n
"},{"location":"upgrade/upgrade_notes_for_10.0.x/#upgrade-to-100x","title":"Upgrade to 10.0.x","text":"Stop Seafile-9.0.x server.
Start from Seafile 10.0.x, run the script:
upgrade/upgrade_9.0_10.0.sh\n
If you are using the pro edition, modify the memcached option in seafile.conf and the SAML SSO configuration if needed.
You can choose one of the methods to upgrade your index data.
"},{"location":"upgrade/upgrade_notes_for_10.0.x/#method-one-reindex-the-old-index-data","title":"Method one, reindex the old index data","text":"1. Download Elasticsearch image:
docker pull elasticsearch:7.17.9\n
Create a new folder to store ES data and give the folder permissions:
mkdir -p /opt/seafile-elasticsearch/data && chmod -R 777 /opt/seafile-elasticsearch/data/\n
Start ES docker image:
sudo docker run -d --name es-7.17 -p 9200:9200 -e \"discovery.type=single-node\" -e \"bootstrap.memory_lock=true\" -e \"ES_JAVA_OPTS=-Xms1g -Xmx1g\" -e \"xpack.security.enabled=false\" --restart=always -v /opt/seafile-elasticsearch/data:/usr/share/elasticsearch/data elasticsearch:7.17.9\n
PS: ES_JAVA_OPTS can be adjusted according to your needs.
2. Create an index with 8.x compatible mappings:
# create repo_head index\ncurl -X PUT -H 'Content-Type: application/json' 'http{s}://{es server IP}:9200/repo_head_8?pretty=true' -d '\n{\n \"mappings\" : {\n \"properties\" : {\n \"commit\" : {\n \"type\" : \"keyword\",\n \"index\" : false\n },\n \"repo\" : {\n \"type\" : \"keyword\",\n \"index\" : false\n },\n \"updatingto\" : {\n \"type\" : \"keyword\",\n \"index\" : false\n }\n }\n }\n}'\n\n# create repofiles index, number_of_shards is the number of shards, here is set to 5, you can also modify it to the most suitable number of shards\ncurl -X PUT -H 'Content-Type: application/json' 'http{s}://{es server IP}:9200/repofiles_8/?pretty=true' -d '\n{\n \"settings\" : {\n \"index\" : {\n \"number_of_shards\" : \"5\",\n \"analysis\" : {\n \"analyzer\" : {\n \"seafile_file_name_ngram_analyzer\" : {\n \"filter\" : [\n \"lowercase\"\n ],\n \"type\" : \"custom\",\n \"tokenizer\" : \"seafile_file_name_ngram_tokenizer\"\n }\n },\n \"tokenizer\" : {\n \"seafile_file_name_ngram_tokenizer\" : {\n \"type\" : \"ngram\",\n \"min_gram\" : \"3\",\n \"max_gram\" : \"4\"\n }\n }\n }\n }\n },\n \"mappings\" : {\n \"properties\" : {\n \"content\" : {\n \"type\" : \"text\",\n \"term_vector\" : \"with_positions_offsets\"\n },\n \"filename\" : {\n \"type\" : \"text\",\n \"fields\" : {\n \"ngram\" : {\n \"type\" : \"text\",\n \"analyzer\" : \"seafile_file_name_ngram_analyzer\"\n }\n }\n },\n \"is_dir\" : {\n \"type\" : \"boolean\"\n },\n \"mtime\" : {\n \"type\" : \"date\"\n },\n \"path\" : {\n \"type\" : \"keyword\"\n },\n \"repo\" : {\n \"type\" : \"keyword\"\n },\n \"size\" : {\n \"type\" : \"long\"\n },\n \"suffix\" : {\n \"type\" : \"keyword\"\n }\n }\n }\n}'\n
3. Set the refresh_interval to -1 and the number_of_replicas to 0 for an efficient reindex:
curl -X PUT -H 'Content-Type: application/json' 'http{s}://{es server IP}:9200/repo_head_8/_settings?pretty' -d '\n{\n \"index\" : {\n \"refresh_interval\" : \"-1\",\n \"number_of_replicas\" : 0\n }\n}'\n\ncurl -X PUT -H 'Content-Type: application/json' 'http{s}://{es server IP}:9200/repofiles_8/_settings?pretty' -d '\n{\n \"index\" : {\n \"refresh_interval\" : \"-1\",\n \"number_of_replicas\" : 0\n }\n}'\n
4. Use the reindex API to copy documents from the 7.x index into the new index:
curl -X POST -H 'Content-Type: application/json' 'http{s}://{es server IP}:9200/_reindex/?wait_for_completion=false&pretty=true' -d '\n{\n \"source\": {\n \"index\": \"repo_head\"\n },\n \"dest\": {\n \"index\": \"repo_head_8\"\n }\n}'\n\ncurl -X POST -H 'Content-Type: application/json' 'http{s}://{es server IP}:9200/_reindex/?wait_for_completion=false&pretty=true' -d '\n{\n \"source\": {\n \"index\": \"repofiles\"\n },\n \"dest\": {\n \"index\": \"repofiles_8\"\n }\n}'\n
5. Use the following command to check if the reindex task is complete:
# Get the task_id of the reindex task:\n$ curl 'http{s}://{es server IP}:9200/_tasks?actions=*reindex&pretty'\n# Check to see if the reindex task is complete:\n$ curl 'http{s}://{es server IP}:9200/_tasks/:<task_id>?pretty'\n
6. Reset the refresh_interval and number_of_replicas to the values used in the old index:
curl -X PUT -H 'Content-Type: application/json' 'http{s}://{es server IP}:9200/repo_head_8/_settings?pretty' -d '\n{\n \"index\" : {\n \"refresh_interval\" : null,\n \"number_of_replicas\" : 1\n }\n}'\n\ncurl -X PUT -H 'Content-Type: application/json' 'http{s}://{es server IP}:9200/repofiles_8/_settings?pretty' -d '\n{\n \"index\" : {\n \"refresh_interval\" : null,\n \"number_of_replicas\" : 1\n }\n}'\n
7. Wait for the Elasticsearch status to change to green (or yellow if it is a single node).
curl 'http{s}://{es server IP}:9200/_cluster/health?pretty'\n
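If you want to script this wait, the health response can be interpreted with a small helper. The sketch below is an illustration, not part of Seafile; it only parses the JSON body that /_cluster/health returns:

```python
import json

def cluster_ready(health_json, single_node=False):
    """Return True when the cluster health allows proceeding:
    'green', or 'yellow' on a single-node setup (where replicas
    can never be allocated, so yellow is the best achievable state)."""
    status = json.loads(health_json).get("status")
    return status == "green" or (single_node and status == "yellow")

# Example response body from /_cluster/health on a single-node cluster
sample = '{"cluster_name": "es", "status": "yellow", "number_of_nodes": 1}'
print(cluster_ready(sample, single_node=True))  # → True
```

In practice you would fetch the body with curl or an HTTP client in a loop and sleep between polls until this returns True.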
8. Use the aliases API to delete the old index and add an alias with the old index name to the new index:
curl -X POST -H 'Content-Type: application/json' 'http{s}://{es server IP}:9200/_aliases?pretty' -d '\n{\n \"actions\": [\n {\"remove_index\": {\"index\": \"repo_head\"}},\n {\"remove_index\": {\"index\": \"repofiles\"}},\n {\"add\": {\"index\": \"repo_head_8\", \"alias\": \"repo_head\"}},\n {\"add\": {\"index\": \"repofiles_8\", \"alias\": \"repofiles\"}}\n ]\n}'\n
9. Deactivate the 7.17 container, pull the 8.x image and run:
$ docker stop es-7.17\n\n$ docker rm es-7.17\n\n$ docker pull elasticsearch:8.6.2\n\n$ sudo docker run -d --name es -p 9200:9200 -e \"discovery.type=single-node\" -e \"bootstrap.memory_lock=true\" -e \"ES_JAVA_OPTS=-Xms1g -Xmx1g\" -e \"xpack.security.enabled=false\" --restart=always -v /opt/seafile-elasticsearch/data:/usr/share/elasticsearch/data elasticsearch:8.6.2\n
"},{"location":"upgrade/upgrade_notes_for_10.0.x/#method-two-rebuild-the-index-and-discard-the-old-index-data","title":"Method two, rebuild the index and discard the old index data","text":"1. Pull Elasticsearch image:
docker pull elasticsearch:8.5.3\n
Create a new folder to store ES data and give the folder permissions:
mkdir -p /opt/seafile-elasticsearch/data && chmod -R 777 /opt/seafile-elasticsearch/data/\n
Start ES docker image:
sudo docker run -d --name es -p 9200:9200 -e \"discovery.type=single-node\" -e \"bootstrap.memory_lock=true\" -e \"ES_JAVA_OPTS=-Xms1g -Xmx1g\" -e \"xpack.security.enabled=false\" --restart=always -v /opt/seafile-elasticsearch/data:/usr/share/elasticsearch/data elasticsearch:8.5.3\n
2. Modify the seafevents.conf:
[INDEX FILES]\n...\nexternal_es_server = true\nes_host = http{s}://{es server IP}\nes_port = 9200\nshards = 10 # default is 5.\n...\n
Restart Seafile server:
su seafile\ncd seafile-server-latest/\n./seafile.sh stop && ./seahub.sh stop\n./seafile.sh start && ./seahub.sh start\n
3. Delete old index data
rm -rf /opt/seafile-elasticsearch/data/*\n
4. Create new index data:
$ cd /opt/seafile/seafile-server-latest\n$ ./pro/pro.py search --update\n
"},{"location":"upgrade/upgrade_notes_for_10.0.x/#method-three-if-you-are-in-a-cluster-environment","title":"Method three, if you are in a cluster environment","text":"1. Deploy elasticsearch 8.x according to method two. Use Seafile 10.0 version to deploy a new backend node and modify the seafevents.conf
file. The background node does not start the Seafile background service, just manually run the command ./pro/pro.py search --update
.
2. Upgrade the other nodes to Seafile 10.0 version and use the new Elasticsearch 8.x server.
3. Then deactivate the old backend node and the old version of Elasticsearch.
"},{"location":"upgrade/upgrade_notes_for_11.0.x/","title":"Upgrade notes for 11.0","text":"These notes give additional information about changes. Please always follow the main upgrade guide.
For docker based version, please check upgrade Seafile Docker image
"},{"location":"upgrade/upgrade_notes_for_11.0.x/#important-release-changes","title":"Important release changes","text":""},{"location":"upgrade/upgrade_notes_for_11.0.x/#change-of-user-identity","title":"Change of user identity","text":"Previous Seafile versions directly used a user's email address or SSO identity as their internal user ID.
Seafile 11.0 introduces virtual user IDs - random, internal identifiers like \"adc023e7232240fcbb83b273e1d73d36@auth.local\". For new users, a virtual ID will be generated instead of directly using their email. A mapping between the email and virtual ID will be stored in the \"profile_profile\" database table. For SSO users, the mapping between the SSO ID and virtual ID is stored in the \"social_auth_usersocialauth\" table.
Overall this brings more flexibility for handling user accounts and identity changes. Existing users keep their old IDs.
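As an illustration only (not Seafile's actual implementation), a virtual ID of this shape and its email mapping could be produced like this; the dictionary stands in for the profile_profile table:

```python
import uuid

def make_virtual_id():
    """Produce an internal ID like 'adc023e7...@auth.local':
    a 32-character random hex string plus the '@auth.local' suffix."""
    return uuid.uuid4().hex + "@auth.local"

# Hypothetical email -> virtual ID mapping, as stored in profile_profile
profile = {}
email = "alice@example.com"
profile[email] = make_virtual_id()
print(profile[email])  # e.g. '3f2a9c...@auth.local'
```

The point of the indirection is that the email can change later without touching the internal ID that library ownership and sharing records reference.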
"},{"location":"upgrade/upgrade_notes_for_11.0.x/#reimplementation-of-ldap-integration","title":"Reimplementation of LDAP Integration","text":"Previous Seafile versions handled LDAP authentication in the ccnet-server component. In Seafile 11.0, LDAP is reimplemented within the Seahub Python codebase.
LDAP configuration has been moved from ccnet.conf to seahub_settings.py. The ccnet_db.LDAPImported table is no longer used - LDAP users are now stored in ccnet_db.EmailUsers along with other users.
Benefits of this new implementation:
You need to run the migrate_ldapusers.py script to merge the ccnet_db.LDAPImported table into the ccnet_db.EmailUsers table. The settings files need to be changed manually. (See more details below.)
If you use OAuth authentication, the configuration needs to be changed a bit.
If you use SAML, you don't need to change configuration files. For SAML2, in version 10, the name_id field is returned from the SAML server and used as the username (the email field in ccnet_db.EmailUsers). In version 11, for old users, Seafile will find the old user and create a name_id to name_id mapping in social_auth_usersocialauth. For new users, Seafile will create a new user with a random ID and add a name_id to random ID mapping in social_auth_usersocialauth. In addition, we have added a feature where you can disable login with a username and password for SAML users by setting DISABLE_ADFS_USER_PWD_LOGIN = True in seahub_settings.py.
Seafile 11.0 drops support for SQLite as the database. It is better to migrate from SQLite to MySQL before upgrading to version 11.0.
There are several reasons driving this change:
To migrate from SQLite database to MySQL database, you can follow the document Migrate from SQLite to MySQL. If you have issues in the migration, just post a thread in our forum. We are glad to help you.
"},{"location":"upgrade/upgrade_notes_for_11.0.x/#elasticsearch-change-pro-edition-only","title":"ElasticSearch change (pro edition only)","text":"Elasticsearch version is not changed in Seafile version 11.0
"},{"location":"upgrade/upgrade_notes_for_11.0.x/#new-saml-prerequisites-multi_tenancy-only","title":"New SAML prerequisites (MULTI_TENANCY only)","text":"For Ubuntu 20.04/22.04
sudo apt-get update\nsudo apt-get install -y dnsutils\n
"},{"location":"upgrade/upgrade_notes_for_11.0.x/#django-csrf-protection-issue","title":"Django CSRF protection issue","text":"Django 4.* has introduced a new check for the origin http header in CSRF verification. It now compares the values of the origin field in HTTP header and the host field in HTTP header. If they are different, an error is triggered.
If you deploy Seafile behind a proxy, use a non-standard port, or deploy Seafile in a cluster, the Origin and Host headers received by Django will likely differ, because the Host header is often rewritten by the proxy. This mismatch results in a CSRF error.
You can add CSRF_TRUSTED_ORIGINS to seahub_settings.py to solve the problem:
CSRF_TRUSTED_ORIGINS = [\"https://<your-domain>\"]\n
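Conceptually, the check compares the Origin header against the request's scheme and Host, and CSRF_TRUSTED_ORIGINS whitelists your public URL. The sketch below is a simplified illustration of this logic, not Django's actual implementation:

```python
from urllib.parse import urlparse

def csrf_origin_ok(origin, host, scheme, trusted_origins=()):
    """Accept the request if Origin is explicitly trusted,
    or matches the scheme://host the server itself sees."""
    if origin in trusted_origins:
        return True
    parsed = urlparse(origin)
    return parsed.scheme == scheme and parsed.netloc == host

# Behind a proxy, the Host seen by Django may be the internal address:
print(csrf_origin_ok("https://seafile.example.com", "127.0.0.1:8000", "http"))  # → False
print(csrf_origin_ok("https://seafile.example.com", "127.0.0.1:8000", "http",
                     trusted_origins=("https://seafile.example.com",)))         # → True
```

This is why listing your public URL in CSRF_TRUSTED_ORIGINS resolves the mismatch without changing the proxy configuration.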
"},{"location":"upgrade/upgrade_notes_for_11.0.x/#new-python-libraries","title":"New Python libraries","text":"Note, you should install Python libraries system wide using root user or sudo mode.
For Ubuntu 20.04/22.04
sudo apt-get update\nsudo apt-get install -y python3-dev ldap-utils libldap2-dev\n\nsudo pip3 install future==0.18.* mysqlclient==2.1.* pillow==10.2.* sqlalchemy==2.0.18 captcha==0.5.* django_simple_captcha==0.6.* djangosaml2==1.5.* pysaml2==7.2.* pycryptodome==3.16.* cffi==1.15.1 python-ldap==3.4.3\n
"},{"location":"upgrade/upgrade_notes_for_11.0.x/#upgrade-to-110x","title":"Upgrade to 11.0.x","text":""},{"location":"upgrade/upgrade_notes_for_11.0.x/#1-stop-seafile-100x-server","title":"1) Stop Seafile-10.0.x server.","text":""},{"location":"upgrade/upgrade_notes_for_11.0.x/#2-start-from-seafile-110x-run-the-script","title":"2) Start from Seafile 11.0.x, run the script:","text":"upgrade/upgrade_10.0_11.0.sh\n
"},{"location":"upgrade/upgrade_notes_for_11.0.x/#3modify-configurations-and-migrate-ldap-records","title":"3\uff09Modify configurations and migrate LDAP records","text":""},{"location":"upgrade/upgrade_notes_for_11.0.x/#change-configurations-for-ldap","title":"Change configurations for LDAP","text":"The configuration items of LDAP login and LDAP sync tasks are migrated from ccnet.conf to seahub_settings.py. The name of the configuration item is based on the 10.0 version, and the characters 'LDAP_' or 'MULTI_LDAP_1' are added. Examples are as follows:
# Basic configuration items for LDAP login\nENABLE_LDAP = True\nLDAP_SERVER_URL = 'ldap://192.168.0.125' # The URL of LDAP server\nLDAP_BASE_DN = 'ou=test,dc=seafile,dc=ren' # The root node of users who can \n # log in to Seafile in the LDAP server\nLDAP_ADMIN_DN = 'administrator@seafile.ren' # DN of the administrator used \n # to query the LDAP server for information\nLDAP_ADMIN_PASSWORD = 'Hello@123' # Password of LDAP_ADMIN_DN\nLDAP_PROVIDER = 'ldap' # Identify the source of the user, used in \n # the table social_auth_usersocialauth, defaults to 'ldap'\nLDAP_LOGIN_ATTR = 'userPrincipalName' # User's attribute used to log in to Seafile, \n # can be mail or userPrincipalName, cannot be changed\nLDAP_FILTER = 'memberOf=CN=testgroup,OU=test,DC=seafile,DC=ren' # Additional filter conditions,\n # users who meet the filter conditions can log in, otherwise they cannot log in\n# For update user info when login\nLDAP_CONTACT_EMAIL_ATTR = '' # For update user's contact_email\nLDAP_USER_ROLE_ATTR = '' # For update user's role\nLDAP_USER_FIRST_NAME_ATTR = 'givenName' # For update user's first name\nLDAP_USER_LAST_NAME_ATTR = 'sn' # For update user's last name\nLDAP_USER_NAME_REVERSE = False # Whether to reverse the user's first and last name\n
The following configuration items are only for Pro Edition:
# Configuration items for LDAP sync tasks.\nLDAP_SYNC_INTERVAL = 60 # LDAP sync task period, in minutes\n\n# LDAP user sync configuration items.\nENABLE_LDAP_USER_SYNC = True # Whether to enable user sync\nLDAP_USER_OBJECT_CLASS = 'person' # This is the name of the class used to search for user objects. \n # In Active Directory, it's usually \"person\". The default value is \"person\".\nLDAP_DEPT_ATTR = '' # LDAP user's department info\nLDAP_UID_ATTR = '' # LDAP user's login_id attribute\nLDAP_AUTO_REACTIVATE_USERS = True # Whether to auto activate deactivated user\nLDAP_USE_PAGED_RESULT = False # Whether to use pagination extension\nIMPORT_NEW_USER = True # Whether to import new users when sync user\nACTIVATE_USER_WHEN_IMPORT = True # Whether to activate the user when importing new user\nENABLE_EXTRA_USER_INFO_SYNC = True # Whether to enable sync of additional user information,\n # including user's full name, contact_email, department, and Windows login name, etc.\nDEACTIVE_USER_IF_NOTFOUND = False # Set to \"true\" if you want to deactivate a user \n # when he/she was deleted in AD server.\n\n# LDAP group sync configuration items.\nENABLE_LDAP_GROUP_SYNC = True # Whether to enable group sync\nLDAP_GROUP_FILTER = '' # Group sync filter\nLDAP_SYNC_DEPARTMENT_FROM_OU = True # Whether to enable sync departments from OU.\nLDAP_GROUP_OBJECT_CLASS = 'group' # This is the name of the class used to search for group objects.\nLDAP_GROUP_MEMBER_ATTR = 'member' # The attribute field to use when loading the group's members. 
\n # For most directory servers, the attribute is \"member\" \n # which is the default value. For \"posixGroup\", it should be set to \"memberUid\".\nLDAP_USER_ATTR_IN_MEMBERUID = 'uid' # The user attribute set in 'memberUid' option, \n # which is used in \"posixGroup\". The default value is \"uid\".\nLDAP_GROUP_UUID_ATTR = 'objectGUID' # Used to uniquely identify groups in LDAP\nLDAP_USE_GROUP_MEMBER_RANGE_QUERY = False # When a group contains too many members, \n # AD will only return part of them. Set this option to TRUE\n # to make LDAP sync work with large groups.\nLDAP_SYNC_GROUP_AS_DEPARTMENT = False # Whether to sync groups as top-level departments in Seafile\nLDAP_DEPT_NAME_ATTR = '' # Used to get the department name.\nLDAP_CREATE_DEPARTMENT_LIBRARY = False # If you decide to sync the group as a department,\n # you can set this option to \"true\". In this way, when \n # the group is synchronized for the first time, a library\n # is automatically created for the department, and the \n # library's name is the department's name.\nLDAP_DEPT_REPO_PERM = 'rw' # Set the permissions of the department repo, default permission is 'rw'.\nLDAP_DEFAULT_DEPARTMENT_QUOTA = -2 # You can set a default space quota for each department\n # when you synchronize a group for the first time. The \n # quota is set to unlimited if this option is not set.\n # Unit is MB.\nDEL_GROUP_IF_NOT_FOUND = False # Set to \"true\" and the sync process will delete the group if it is not found in the LDAP server.\nDEL_DEPARTMENT_IF_NOT_FOUND = False # Set to \"true\" and the sync process will delete the department if it is not found in the LDAP server.\n
If you sync users from LDAP to Seafile and you want Seafile to find the existing account when a user logs in via SSO (ADFS or OAuth) instead of creating a new one, you can set SSO_LDAP_USE_SAME_UID = True:
SSO_LDAP_USE_SAME_UID = True\n
Note: here the UID means the unique user ID. In LDAP it is the attribute you use for LDAP_LOGIN_ATTR (not LDAP_UID_ATTR); in ADFS it is the uid attribute. You need to make sure you use the same attribute for the two settings.
Run the following script to migrate users in LDAPImported to EmailUsers:
cd <install-path>/seafile-server-latest\npython3 migrate_ldapusers.py\n
For Seafile docker
docker exec -it seafile /usr/bin/python3 /opt/seafile/seafile-server-latest/migrate_ldapusers.py\n
"},{"location":"upgrade/upgrade_notes_for_11.0.x/#change-configuration-for-oauth","title":"Change configuration for OAuth:","text":"In the new version, the OAuth login configuration should keep the email attribute unchanged to be compatible with new and old user logins. In version 11.0, a new uid attribute is added to be used as a user's external unique ID. The uid will be stored in social_auth_usersocialauth to map to internal virtual ID. For old users, the original email is used the internal virtual ID. The example is as follows:
# Version 10.0 or earlier\nOAUTH_ATTRIBUTE_MAP = {\n \"id\": (True, \"email\"),\n \"name\": (False, \"name\"),\n \"email\": (False, \"contact_email\"),\n}\n\n# Since 11.0 version, added 'uid' attribute.\nOAUTH_ATTRIBUTE_MAP = {\n \"id\": (True, \"email\"), # In the new version, the email attribute configuration should be kept unchanged to be compatible with old and new user logins\n \"uid\": (True, \"uid\"), # Seafile use 'uid' as the external unique identifier of the user. Different OAuth systems have different attributes, which may be: 'uid' or 'username', etc.\n \"name\": (False, \"name\"),\n \"email\": (False, \"contact_email\"),\n}\n
When a user logs in, Seafile will first use the \"id -> email\" map to find the old user and then create a \"uid -> uid\" map for this old user. After all users have logged in once, you can delete the configuration \"id\": (True, \"email\"). You can also manually add records in social_auth_usersocialauth to map external uids to old users.
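The lookup order described above can be sketched as follows. This is a simplified model of the behaviour, not Seafile's actual code; the dictionaries stand in for the social_auth_usersocialauth uid records and the old email-based accounts:

```python
def resolve_user(oauth_id, oauth_uid, uid_map, email_map):
    """Resolve an OAuth login to an internal user, recording the
    uid mapping for old users on their first login after the upgrade."""
    if oauth_uid in uid_map:            # new-style "uid -> uid" lookup
        return uid_map[oauth_uid]
    if oauth_id in email_map:           # old-style "id -> email" lookup
        user = email_map[oauth_id]
        uid_map[oauth_uid] = user       # create the mapping for future logins
        return user
    return None                         # a real server would create a new user here

uid_map = {}
email_map = {"alice@corp.example": "adc023e7@auth.local"}
print(resolve_user("alice@corp.example", "alice", uid_map, email_map))  # → adc023e7@auth.local
print("alice" in uid_map)  # → True: subsequent logins resolve via the uid mapping
```

Once every old user has logged in (so every uid mapping exists), the email-based fallback is no longer needed, which is why the \"id\": (True, \"email\") entry can then be dropped.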
We have documented common issues encountered by users when upgrading to version 11.0 in our FAQ https://cloud.seatable.io/dtable/external-links/7b976c85f504491cbe8e/?tid=0000&vid=0000.
If you encounter any issue, please check there first.
"},{"location":"upgrade/upgrade_notes_for_12.0.x/","title":"Upgrade notes for 12.0","text":"These notes give additional information about changes. Please always follow the main upgrade guide.
For docker based version, please check upgrade Seafile Docker image
Seafile version 12.0 has following major changes:
Configuration changes:
- A .env file is needed to contain some configuration items. These configuration items need to be shared by different components in Seafile. We name it .env to be consistent with the Docker based installation.
- ccnet.conf is removed. Some of its configuration items are moved to the .env file; others are read from items with the same name in seafile.conf.

Other changes:
Breaking changes
Deploying Seafile with the binary package is now deprecated and will probably no longer be supported in version 13.0. We recommend that you migrate your existing Seafile deployment to a Docker based one.
"},{"location":"upgrade/upgrade_notes_for_12.0.x/#elasticsearch-change-pro-edition-only","title":"ElasticSearch change (pro edition only)","text":"Elasticsearch version is not changed in Seafile version 12.0
"},{"location":"upgrade/upgrade_notes_for_12.0.x/#new-python-libraries","title":"New Python libraries","text":"Note, you should install Python libraries system wide using root user or sudo mode.
For Ubuntu 22.04/24.04
sudo pip3 install future==1.0.* mysqlclient==2.2.* pillow==10.4.* sqlalchemy==2.0.* \\\ngevent==24.2.* captcha==0.6.* django_simple_captcha==0.6.* djangosaml2==1.9.* \\\npysaml2==7.3.* pycryptodome==3.20.* cffi==1.17.0 python-ldap==3.4.*\n
"},{"location":"upgrade/upgrade_notes_for_12.0.x/#upgrade-to-120-for-binary-installation","title":"Upgrade to 12.0 (for binary installation)","text":"The following instruction is for binary package based installation. If you use Docker based installation, please see Updgrade Docker
Note
If you have deployed the Notification Server, it should be re-deployed with the same version as the Seafile server.
For example:
You can modify .env
in your Notification Server host to re-deploy:
NOTIFICATION_SERVER_IMAGE=seafileltd/notification-server:12.0-latest\n
Restart Notification Server:
docker compose restart\n
Note: If you have a large number of records in the Activity
table in MySQL, clear this table first (see Clean Database). Otherwise, the database upgrade will take a long time.
upgrade/upgrade_11.0_12.0.sh\n
"},{"location":"upgrade/upgrade_notes_for_12.0.x/#3-create-the-env-file-in-conf-directory","title":"3) Create the .env
file in conf/ directory","text":"conf/.env
JWT_PRIVATE_KEY=xxx\nSEAFILE_SERVER_PROTOCOL=https\nSEAFILE_SERVER_HOSTNAME=seafile.example.com\nSEAFILE_MYSQL_DB_HOST=db # your MySQL host\nSEAFILE_MYSQL_DB_PORT=3306\nSEAFILE_MYSQL_DB_USER=seafile\nSEAFILE_MYSQL_DB_PASSWORD=<your MySQL password>\nSEAFILE_MYSQL_DB_CCNET_DB_NAME=ccnet_db\nSEAFILE_MYSQL_DB_SEAFILE_DB_NAME=seafile_db\nSEAFILE_MYSQL_DB_SEAHUB_DB_NAME=seahub_db\n
Note: JWT_PRIVATE_KEY is a random string with a length of no less than 32 characters. You can generate it, for example, with: pwgen -s 40 1
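If pwgen is not available on your system, any tool that produces a sufficiently long random string works; a minimal sketch using openssl (the choice of openssl is an assumption, it is merely commonly preinstalled):

```shell
# Generate a 40-character random secret for JWT_PRIVATE_KEY.
# openssl rand -hex 32 yields 64 hex characters; keep the first 40.
JWT_PRIVATE_KEY=$(openssl rand -hex 32 | cut -c1-40)
echo "JWT_PRIVATE_KEY=$JWT_PRIVATE_KEY"
```

Paste the printed line into conf/.env.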
Since Seafile 12.0, we use Docker to deploy the notification server. Please follow the notification server document.
Note
The notification server is designed to work with a Docker based deployment. To make it work with the Seafile binary package on the same server, you will need to add proper Nginx rules for the notification server.
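A sketch of such Nginx rules (the port 8083 and the location paths below assume the notification server's defaults; verify them against your deployment before use):

```nginx
# Hypothetical reverse-proxy rules for the notification server.
# 8083 is assumed to be the notification server's listen port.
location /notification/ws {
    proxy_pass http://127.0.0.1:8083/ws;
    proxy_http_version 1.1;
    # WebSocket upgrade headers are required for the long-lived connection.
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";
    proxy_read_timeout 4m;
}

location /notification {
    proxy_pass http://127.0.0.1:8083/;
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
}
```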
"},{"location":"upgrade/upgrade_notes_for_12.0.x/#upgrade-seadoc-from-08-to-10","title":"Upgrade SeaDoc from 0.8 to 1.0","text":"If you have deployed SeaDoc v0.8 with Seafile v11.0, you can upgrade it to 1.0 use the following two steps:
SeaDoc and Seafile binary package
Deploying SeaDoc and the Seafile binary package on the same server is no longer officially supported. You will need to add proper Nginx rules for the SeaDoc server.
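As a rough sketch of such rules (the port 7070 and location paths assume sdoc-server's defaults; check your SeaDoc .env before use):

```nginx
# Hypothetical rules for a separately deployed SeaDoc server.
# 7070 is assumed to be sdoc-server's listen port.
location /sdoc-server/ {
    proxy_pass http://127.0.0.1:7070/;
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    client_max_body_size 100m;
}

location /socket.io {
    proxy_pass http://127.0.0.1:7070;
    proxy_http_version 1.1;
    # SeaDoc's realtime collaboration runs over WebSocket.
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";
}
```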
"},{"location":"upgrade/upgrade_notes_for_12.0.x/#delete-sdoc_db","title":"Delete sdoc_db","text":"From version 1.0, SeaDoc is using seahub_db database to store its operation logs and no longer need an extra database sdoc_db. The database tables in seahub_db are created automatically when you upgrade Seafile server from v11.0 to v12.0. You can simply delete sdoc_db.
"},{"location":"upgrade/upgrade_notes_for_12.0.x/#deploy-a-new-seadoc-server","title":"Deploy a new SeaDoc server","text":"Please see the document Setup SeaDoc to install SeaDoc on a separate machine and integrate with your binary packaged based Seafile server v12.0.
"},{"location":"upgrade/upgrade_notes_for_12.0.x/#faq","title":"FAQ","text":"We have documented common issues encountered by users when upgrading to version 12.0 in our FAQ https://cloud.seatable.io/dtable/external-links/7b976c85f504491cbe8e/?tid=0000&vid=0000.
If you encounter any issue, please check it first.
"},{"location":"upgrade/upgrade_notes_for_7.1.x/","title":"Upgrade notes for 7.1.x","text":"These notes give additional information about changes. Please always follow the main upgrade guide.
"},{"location":"upgrade/upgrade_notes_for_7.1.x/#important-release-changes","title":"Important release changes","text":"From 7.1.0 version, Seafile will depend on the Python 3 and is\u00a0not\u00a0compatible\u00a0with\u00a0Python\u00a02.
Therefore you cannot upgrade directly from Seafile 6.x.x to 7.1.x.
If your current version of Seafile is not 7.0.x, you must first download the 7.0.x installation package and upgrade to 7.0.x before performing the subsequent operations.
To support both Python 3.6 and 3.7, we no longer bundle Python libraries with the Seafile package. You need to install most of the libraries on your own, as below.
"},{"location":"upgrade/upgrade_notes_for_7.1.x/#deploy-python3","title":"Deploy Python3","text":"Note, you should install Python libraries system wide using root user or sudo mode.
"},{"location":"upgrade/upgrade_notes_for_7.1.x/#seafile-ce","title":"Seafile-CE","text":"sudo apt-get install python3 python3-setuptools python3-pip memcached libmemcached-dev -y\n\nsudo pip3 install --timeout=3600 Pillow==9.4.0 pylibmc captcha jinja2 sqlalchemy==1.3.8 \\\n django-pylibmc django-simple-captcha python3-ldap\n
yum install python3 python3-setuptools python3-pip -y\n\nsudo pip3 install --timeout=3600 Pillow==9.4.0 pylibmc captcha jinja2 sqlalchemy==1.3.8 \\\n django-pylibmc django-simple-captcha python3-ldap\n
"},{"location":"upgrade/upgrade_notes_for_7.1.x/#seafile-pro","title":"Seafile-Pro","text":"apt-get install python3 python3-setuptools python3-pip -y\n\nsudo pip3 install --timeout=3600 Pillow==9.4.0 pylibmc captcha jinja2 sqlalchemy==1.3.8 \\\n django-pylibmc django-simple-captcha python3-ldap\n
yum install python3 python3-setuptools python3-pip -y\n\nsudo pip3 install --timeout=3600 Pillow==9.4.0 pylibmc captcha jinja2 sqlalchemy==1.3.8 \\\n django-pylibmc django-simple-captcha python3-ldap\n
"},{"location":"upgrade/upgrade_notes_for_7.1.x/#upgrade-to-71x","title":"Upgrade to 7.1.x","text":"upgrade/upgrade_7.0_7.1.sh\n
rm -rf /tmp/seahub_cache # Clear the Seahub cache files from disk.\n# If you are using the Memcached service, you need to restart the service to clear the Seahub cache.\nsystemctl restart memcached\n
Since Seafile 7.1.x, Seafdav no longer supports FastCGI, only WSGI.
This means that if you are using Seafdav and have deployed an Nginx or Apache reverse proxy, you need to change FastCGI to WSGI.
"},{"location":"upgrade/upgrade_notes_for_7.1.x/#for-nginx","title":"For Nginx","text":"For Seafdav, the configuration of Nginx is as follows:
.....\n location /seafdav {\n proxy_pass http://127.0.0.1:8080/seafdav;\n proxy_set_header Host $host;\n proxy_set_header X-Real-IP $remote_addr;\n proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;\n proxy_set_header X-Forwarded-Host $server_name;\n proxy_set_header X-Forwarded-Proto $scheme;\n proxy_read_timeout 1200s;\n client_max_body_size 0;\n\n access_log /var/log/nginx/seafdav.access.log seafileformat;\n error_log /var/log/nginx/seafdav.error.log;\n }\n
"},{"location":"upgrade/upgrade_notes_for_7.1.x/#for-apache","title":"For Apache","text":"For Seafdav, the configuration of Apache is as follows:
......\n <Location /seafdav>\n ProxyPass \"http://127.0.0.1:8080/seafdav\"\n </Location>\n
"},{"location":"upgrade/upgrade_notes_for_7.1.x/#builtin-office-file-preview","title":"Builtin office file preview","text":"The implementation of builtin office file preview has been changed. You should update your configuration according to:
https://download.seafile.com/published/seafile-manual/deploy_pro/office_documents_preview.md#user-content-Version%207.1+
"},{"location":"upgrade/upgrade_notes_for_7.1.x/#if-you-are-using-ceph-backend","title":"If you are using Ceph backend","text":"If you are using Ceph storage backend, you need to install new python library.
On Debian/Ubuntu (Seafile 7.1+):
sudo apt-get install python3-rados\n
"},{"location":"upgrade/upgrade_notes_for_7.1.x/#login-page-customization","title":"Login Page Customization","text":"If you have customized the login page or other html pages, as we have removed some old javascript libraries, your customized pages may not work anymore. Please try to re-customize based on the newest version.
"},{"location":"upgrade/upgrade_notes_for_7.1.x/#user-name-encoding-issue-with-shibboleth-login","title":"User name encoding issue with Shibboleth login","text":"Note, the following patch is included in version pro-7.1.8 and ce-7.1.5 already.
Two customers have reported that after upgrading to version 7.1, users who log in via Shibboleth single sign-on get a wrong name if the name contains special characters. We suspect it is a Shibboleth problem, as it does not send the name in UTF-8 encoding to Seafile. (https://issues.shibboleth.net/jira/browse/SSPCPP-2)
The solution is to modify the code in seahub/thirdpart/shibboleth/middleware.py:
158 if nickname.strip(): # set nickname when it's not empty\n159 p.nickname = nickname\n\nto \n\n158 if nickname.strip(): # set nickname when it's not empty\n159 p.nickname = nickname.encode(\"iso-8859-1\").decode('utf8')\n
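You can verify what the patch does in isolation: a UTF-8 name that was wrongly decoded as ISO-8859-1 shows up as mojibake, and the encode/decode round trip recovers it. \"Müller\" below is a hypothetical example name, not taken from the reports:

```shell
# "M\u00c3\u00bcller" is the mojibake form of "Müller" after a wrong
# ISO-8859-1 decode; re-encoding and decoding as UTF-8 repairs it.
python3 -c 'print("M\u00c3\u00bcller".encode("iso-8859-1").decode("utf-8"))'
# prints: Müller
```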
If you have this problem too, please let us know.
"},{"location":"upgrade/upgrade_notes_for_7.1.x/#faq","title":"FAQ","text":""},{"location":"upgrade/upgrade_notes_for_7.1.x/#sql-error-during-upgrade","title":"SQL Error during upgrade","text":"The upgrade script will try to create a missing table and remove an used index. The following SQL errors are jus warnings and can be ignored:
[INFO] updating seahub database...\n/opt/seafile/seafile-server-7.1.1/seahub/thirdpart/pymysql/cursors.py:170: Warning: (1050, \"Table 'base_reposecretkey' already exists\")\n result = self._query(query)\n[WARNING] Failed to execute sql: (1091, \"Can't DROP 'drafts_draft_origin_file_uuid_7c003c98_uniq'; check that column/key exists\")\n
"},{"location":"upgrade/upgrade_notes_for_7.1.x/#internal-server-error-after-upgrade-to-version-71","title":"Internal server error after upgrade to version 7.1","text":"Please check whether the seahub process is running in your server. If it is running, there should be an error log in seahub.log for internal server error.
If the seahub process is not running, you can modify conf/gunicorn.conf, changing daemon = True
to daemon = False
, then run ./seahub.sh again. If there are missing Python dependencies, the error will be reported in the terminal.
The most common issue is that you use an old memcache configuration that depends on python-memcache. The new way is
'BACKEND': 'django_pylibmc.memcached.PyLibMCCache'\n
The old way is
'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache',\n
"},{"location":"upgrade/upgrade_notes_for_8.0.x/","title":"Upgrade notes for 8.0","text":"These notes give additional information about changes. Please always follow the main upgrade guide.
"},{"location":"upgrade/upgrade_notes_for_8.0.x/#important-release-changes","title":"Important release changes","text":"From 8.0, ccnet-server component is removed. But ccnet.conf is still needed.
"},{"location":"upgrade/upgrade_notes_for_8.0.x/#install-new-python-libraries","title":"Install new Python libraries","text":"Note, you should install Python libraries system wide using root user or sudo mode.
apt-get install libmysqlclient-dev\n\nsudo pip3 install -U future mysqlclient sqlalchemy==1.4.3\n
apt-get install default-libmysqlclient-dev \n\nsudo pip3 install future mysqlclient sqlalchemy==1.4.3\n
yum install python3-devel mysql-devel gcc gcc-c++ -y\n\nsudo pip3 install future\nsudo pip3 install mysqlclient==2.0.1 sqlalchemy==1.4.3\n
yum install python3-devel mysql-devel gcc gcc-c++ -y\n\nsudo pip3 install future mysqlclient sqlalchemy==1.4.3\n
"},{"location":"upgrade/upgrade_notes_for_8.0.x/#change-shibboleth-setting","title":"Change Shibboleth Setting","text":"If you are using Shibboleth and have configured EXTRA_MIDDLEWARE_CLASSES
EXTRA_MIDDLEWARE_CLASSES = (\n 'shibboleth.middleware.ShibbolethRemoteUserMiddleware',\n)\n
please change it to EXTRA_MIDDLEWARE
EXTRA_MIDDLEWARE = (\n 'shibboleth.middleware.ShibbolethRemoteUserMiddleware',\n)\n
This is because support for old-style middleware using settings.MIDDLEWARE_CLASSES
was removed in Django 2.0.
Starting from Seafile 7.1.x, run the script:
upgrade/upgrade_7.1_8.0.sh\n
Start Seafile-8.0.x server.
These notes give additional information about changes. Please always follow the main upgrade guide.
"},{"location":"upgrade/upgrade_notes_for_9.0.x/#important-release-changes","title":"Important release changes","text":"9.0 version includes following major changes:
The new file-server written in golang serves HTTP requests to upload/download/sync files. It provides three advantages:
You can turn the golang file-server on by adding the following configuration to seafile.conf
[fileserver]\nuse_go_fileserver = true\n
"},{"location":"upgrade/upgrade_notes_for_9.0.x/#new-python-libraries","title":"New Python libraries","text":"Note, you should install Python libraries system wide using root user or sudo mode.
sudo pip3 install pycryptodome==3.12.0 cffi==1.14.0\n
"},{"location":"upgrade/upgrade_notes_for_9.0.x/#upgrade-to-90x","title":"Upgrade to 9.0.x","text":"Start from Seafile 9.0.x, run the script:
upgrade/upgrade_8.0_9.0.sh\n
Start Seafile-9.0.x server.
If your Elasticsearch data is not large, it is recommended to deploy the latest 7.x version of ElasticSearch and then rebuild the index. The specific steps are as follows:
Download ElasticSearch image
docker pull elasticsearch:7.16.2\n
Create a new folder to store ES data and give the folder permissions
mkdir -p /opt/seafile-elasticsearch/data && chmod -R 777 /opt/seafile-elasticsearch/data/\n
Note: You must properly grant permission to access the es data directory, and run the Elasticsearch container as the root user, refer to here.
Start ES docker image
sudo docker run -d --name es -p 9200:9200 -e \"discovery.type=single-node\" -e \"bootstrap.memory_lock=true\" -e \"ES_JAVA_OPTS=-Xms2g -Xmx2g\" -e \"xpack.security.enabled=false\" --restart=always -v /opt/seafile-elasticsearch/data:/usr/share/elasticsearch/data -d elasticsearch:7.16.2\n
Delete old index data
rm -rf /opt/seafile/pro-data/search/data/*\n
Modify seafevents.conf
[INDEX FILES]\nexternal_es_server = true\nes_host = your server's IP (use 127.0.0.1 if deployed locally)\nes_port = 9200\n
Restart seafile
su seafile\ncd seafile-server-latest/\n./seafile.sh stop && ./seahub.sh stop\n./seafile.sh start && ./seahub.sh start\n
"},{"location":"upgrade/upgrade_notes_for_9.0.x/#method-two-reindex-the-existing-data","title":"Method two, reindex the existing data","text":"If your data volume is relatively large, it will take a long time to rebuild indexes for all Seafile databases, so you can reindex the existing data. This requires the following steps
The detailed process is as follows
Download ElasticSearch image:
docker pull elasticsearch:7.16.2\n
Note: For Seafile version 9.0, you need to manually create the Elasticsearch data path on the host machine and give it 777 permissions; otherwise, Elasticsearch will report path permission problems when starting. The command is as follows:
mkdir -p /opt/seafile-elasticsearch/data \n
Move original data to the new folder and give the folder permissions
mv /opt/seafile/pro-data/search/data/* /opt/seafile-elasticsearch/data/\nchmod -R 777 /opt/seafile-elasticsearch/data/\n
Note: You must properly grant permission to access the es data directory, and run the Elasticsearch container as the root user, refer to here.
Start ES docker image
sudo docker run -d --name es -p 9200:9200 -e \"discovery.type=single-node\" -e \"bootstrap.memory_lock=true\" -e \"ES_JAVA_OPTS=-Xms1g -Xmx1g\" -e \"xpack.security.enabled=false\" --restart=always -v /opt/seafile-elasticsearch/data:/usr/share/elasticsearch/data -d elasticsearch:7.16.2\n
Note: ES_JAVA_OPTS
can be adjusted according to your needs.
Create an index with 7.x compatible mappings.
curl -X PUT -H 'Content-Type: application/json' 'http{s}://{es server IP}:9200/new_repo_head?include_type_name=false&pretty=true' -d '\n{\n \"mappings\" : {\n \"properties\" : {\n \"commit\" : {\n \"type\" : \"text\",\n \"index\" : false\n },\n \"repo\" : {\n \"type\" : \"text\",\n \"index\" : false\n },\n \"updatingto\" : {\n \"type\" : \"text\",\n \"index\" : false\n }\n }\n }\n}'\n\ncurl -X PUT -H 'Content-Type: application/json' 'http{s}://{es server IP}:9200/new_repofiles/?include_type_name=false&pretty=true' -d '\n{\n \"settings\" : {\n \"index\" : {\n \"number_of_shards\" : 5,\n \"number_of_replicas\" : 1,\n \"analysis\" : {\n \"analyzer\" : {\n \"seafile_file_name_ngram_analyzer\" : {\n \"filter\" : [\n \"lowercase\"\n ],\n \"type\" : \"custom\",\n \"tokenizer\" : \"seafile_file_name_ngram_tokenizer\"\n }\n },\n \"tokenizer\" : {\n \"seafile_file_name_ngram_tokenizer\" : {\n \"type\" : \"ngram\",\n \"min_gram\" : \"3\",\n \"max_gram\" : \"4\"\n }\n }\n }\n }\n },\n \"mappings\" : {\n \"properties\" : {\n \"content\" : {\n \"type\" : \"text\",\n \"term_vector\" : \"with_positions_offsets\"\n },\n \"filename\" : {\n \"type\" : \"text\",\n \"fields\" : {\n \"ngram\" : {\n \"type\" : \"text\",\n \"analyzer\" : \"seafile_file_name_ngram_analyzer\"\n }\n }\n },\n \"is_dir\" : {\n \"type\" : \"boolean\"\n },\n \"mtime\" : {\n \"type\" : \"date\"\n },\n \"path\" : {\n \"type\" : \"keyword\"\n },\n \"repo\" : {\n \"type\" : \"keyword\"\n },\n \"size\" : {\n \"type\" : \"long\"\n },\n \"suffix\" : {\n \"type\" : \"keyword\"\n }\n }\n }\n}'\n
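The seafile_file_name_ngram_tokenizer defined above splits lowercased filenames into 3-4 character fragments, which is what lets partial-name queries match. A rough illustration of the fragments it produces (a simplified sketch with a hypothetical filename, not Elasticsearch's exact token stream):

```shell
python3 - <<'EOF'
# Approximate the 3-4 character ngrams for a sample filename.
name = "report.pdf"
grams = {name[i:i+n] for n in (3, 4) for i in range(len(name) - n + 1)}
# Queries like "por" or "repo" now match via these fragments.
print(sorted(grams)[:4])
# -> ['.pd', '.pdf', 'epo', 'epor']
EOF
```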
Set the refresh_interval
to -1
and the number_of_replicas
to 0
for efficient reindexing:
curl -X PUT -H 'Content-Type: application/json' 'http{s}://{es server IP}:9200/new_repo_head/_settings?pretty' -d '\n{\n \"index\" : {\n \"refresh_interval\" : \"-1\",\n \"number_of_replicas\" : 0\n }\n}'\n\ncurl -X PUT -H 'Content-Type: application/json' 'http{s}://{es server IP}:9200/new_repofiles/_settings?pretty' -d '\n{\n \"index\" : {\n \"refresh_interval\" : \"-1\",\n \"number_of_replicas\" : 0\n }\n}'\n
Use the reindex API to copy documents from the 5.x index into the new index.
curl -X POST -H 'Content-Type: application/json' 'http{s}://{es server IP}:9200/_reindex/?pretty' -d '\n{\n \"source\": {\n \"index\": \"repo_head\",\n \"type\": \"repo_commit\"\n },\n \"dest\": {\n \"index\": \"new_repo_head\",\n \"type\": \"_doc\"\n }\n}'\n\ncurl -X POST -H 'Content-Type: application/json' 'http{s}://{es server IP}:9200/_reindex/?pretty' -d '\n{\n \"source\": {\n \"index\": \"repofiles\",\n \"type\": \"file\"\n },\n \"dest\": {\n \"index\": \"new_repofiles\",\n \"type\": \"_doc\"\n }\n}'\n
Reset the refresh_interval
and number_of_replicas
to the values used in the old index.
curl -X PUT -H 'Content-Type: application/json' 'http{s}://{es server IP}:9200/new_repo_head/_settings?pretty' -d '\n{\n \"index\" : {\n \"refresh_interval\" : null,\n \"number_of_replicas\" : 1\n }\n}'\n\ncurl -X PUT -H 'Content-Type: application/json' 'http{s}://{es server IP}:9200/new_repofiles/_settings?pretty' -d '\n{\n \"index\" : {\n \"refresh_interval\" : null,\n \"number_of_replicas\" : 1\n }\n}'\n
Wait for the index status to change to green
.
curl http{s}://{es server IP}:9200/_cluster/health?pretty\n
Use the aliases API to delete the old indexes and add aliases with the old index names to the new indexes.
curl -X POST -H 'Content-Type: application/json' 'http{s}://{es server IP}:9200/_aliases?pretty' -d '\n{\n \"actions\": [\n {\"remove_index\": {\"index\": \"repo_head\"}},\n {\"remove_index\": {\"index\": \"repofiles\"}},\n {\"add\": {\"index\": \"new_repo_head\", \"alias\": \"repo_head\"}},\n {\"add\": {\"index\": \"new_repofiles\", \"alias\": \"repofiles\"}}\n ]\n}'\n
After reindex, modify the configuration in Seafile.
Modify seafevents.conf
[INDEX FILES]\nexternal_es_server = true\nes_host = your server's IP\nes_port = 9200\n
Restart seafile
su seafile\ncd seafile-server-latest/\n./seafile.sh stop && ./seahub.sh stop\n./seafile.sh start && ./seahub.sh start\n
"},{"location":"upgrade/upgrade_notes_for_9.0.x/#method-three-if-you-are-in-a-cluster-environment","title":"Method three, if you are in a cluster environment","text":"Deploy a new ElasticSeach 7.x service, use Seafile 9.0 version to deploy a new backend node, and connect to ElasticSeach 7.x. The background node does not start the Seafile background service, just manually run the command ./pro/pro.py search --update
, and then upgrade the other nodes to Seafile 9.0 version and use the new ElasticSeach 7.x after the index is created. Then deactivate the old backend node and the old version of ElasticSeach.
key_id
is required to authenticate you to S3. You can find the key_id
in the "security credentials" section on your AWS account page or from your storage provider.key
is required to authenticate you to S3. You can find the key
in the "security credentials" section on your AWS account page or from your storage provider.use_v4_signature