Application Settings Page#
Accessible at `/settings/application/` of your Tube Archivist instance, this page holds all of the general application configuration settings (minus configuration of the scheduler).
Subscriptions#
Settings related to channel management. Disable shorts or streams by setting their page size to 0 (zero). This can also be configured on a per-channel basis.
The page size defines how many videos get analyzed by Tube Archivist each time you click on Rescan Subscriptions. The default page size used by yt-dlp is 50, which is also the recommended value to set here. Any higher value will slow down the rescan process: for example, setting the value to 51 means yt-dlp has to go through 2 pages of results instead of 1, doubling the time that process takes.
Also see the FAQ Why does subscribing to a channel not download the complete channel?
Video Page Size#
Regular videos from a channel.
Live Page Size#
Same as above, but for a channel's live streams. Disable by setting to 0.
Shorts Page Size#
Same as above, but for a channel's shorts videos. Disable by setting to 0.
Auto Start#
This will automatically start downloading videos from your subscriptions, prioritizing them over regular videos added to the download queue.
Downloads#
Settings related to the download process.
Download Speed Limit#
Set your download speed limit in KB/s. This will pass the option `--limit-rate` to yt-dlp.
Throttled Rate Limit#
Restart the download if the download speed drops below this value in KB/s. This will pass the option `--throttled-rate` to yt-dlp. Using this option might have a negative effect if you have an unstable or slow internet connection.
Sleep Interval#
Time in seconds to sleep between repeated requests to YouTube. It is recommended to set this to at least 10 seconds to avoid throttling and getting blocked. The value set will be applied with a random variation of +/- 50%, e.g. a sleep interval of 10 seconds will delay requests by between 5 and 15 seconds. This is to mimic regular user traffic.
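For orientation, these three settings correspond to plain yt-dlp options. Below is a minimal sketch using yt-dlp's Python API, assuming example values of a 1000 KB/s speed limit, a 100 KB/s throttled rate and a 10 second sleep interval; how Tube Archivist assembles these options internally may differ.

```python
from yt_dlp import YoutubeDL

# Example values only: 1000 KB/s limit, 100 KB/s throttled rate,
# 10 s sleep interval with the +/- 50% variation described above.
ydl_opts = {
    "ratelimit": 1000 * 1024,          # --limit-rate, in bytes per second
    "throttledratelimit": 100 * 1024,  # --throttled-rate, in bytes per second
    "sleep_interval": 5,               # lower bound of the randomized sleep
    "max_sleep_interval": 15,          # upper bound of the randomized sleep
}

with YoutubeDL(ydl_opts) as ydl:
    ydl.download(["https://www.youtube.com/watch?v=<video-id>"])
```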
Auto Delete Watched Videos#
Automatically delete videos marked as watched after the selected number of days. If activated, this checks your videos after the download task is finished. Auto-deleted videos get marked as ignored and won't get added again in future rescans.
Download Format#
Additional settings passed to yt-dlp.
Format#
This controls which streams get downloaded and is equivalent to passing `--format` to yt-dlp. Use one of the recommended configurations or review the yt-dlp documentation. Please note: the option `--merge-output-format mp4` is automatically passed to yt-dlp to guarantee browser compatibility. Similarly, `--check-formats` is passed as well to check that the selected formats are actually downloadable.
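As an illustration, a browser-friendly selector for H.264/AAC streams up to 1080p could look like the sketch below; this is one example selector, not necessarily the exact value suggested in the settings dropdown. The two automatically passed options are shown in their Python API form.

```python
from yt_dlp import YoutubeDL

ydl_opts = {
    # Example: best H.264 video up to 1080p plus AAC audio, plain mp4 as fallback
    "format": "bestvideo[height<=1080][vcodec*=avc1]+bestaudio[acodec*=mp4a]/mp4",
    "merge_output_format": "mp4",  # added automatically for browser compatibility
    "check_formats": "selected",   # roughly --check-formats: verify selected formats
}

with YoutubeDL(ydl_opts) as ydl:
    ydl.download(["https://www.youtube.com/watch?v=<video-id>"])
```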
Format Sort#
This allows you to change how yt-dlp sorts formats by passing `--format-sort` to yt-dlp. Refer to the documentation to see what you can pass here. Be aware that some codecs might not be compatible with your browser of choice.
Extractor Language#
Some channels provide translated video titles and descriptions. Add the two-letter ISO language code to set your preferred default language. This will only have an effect if the uploader adds translations. Not all language codes are supported; see the documentation (the `lang` section) for more details.
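For reference, this setting maps to the `lang` extractor argument of the youtube extractor. A small sketch via the Python API, assuming German (`de`) as the example language:

```python
from yt_dlp import YoutubeDL

# Example: prefer German ("de") translated titles and descriptions where available
ydl_opts = {
    "extractor_args": {"youtube": {"lang": ["de"]}},
}

with YoutubeDL(ydl_opts) as ydl:
    info = ydl.extract_info("https://www.youtube.com/watch?v=<video-id>", download=False)
    print(info["title"])  # translated title, if the uploader provided one
```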
Embed Metadata#
This saves the available tags directly into the media file by passing `--embed-metadata` to yt-dlp.
Embed Thumbnail#
This saves the thumbnail into the media file by passing `--embed-thumbnail` to yt-dlp.
Subtitles#
Subtitle Language#
Select the subtitle language you would like to download. Add a comma separated list for multiple languages. For Chinese you must specify `zh-Hans` or `zh-Hant`; specifying `zh` is invalid and the subtitles won't download successfully.
Enable Auto Generated#
This will fall back to YouTube's auto-generated subtitles if subtitles from the uploader are not available. Auto-generated subtitles are usually less accurate, particularly for auto-translated tracks.
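Under the hood these two settings correspond to yt-dlp's subtitle options. A minimal sketch, assuming English and Simplified Chinese as example languages; the fallback logic itself lives in Tube Archivist, the sketch just shows the underlying options.

```python
from yt_dlp import YoutubeDL

ydl_opts = {
    "writesubtitles": True,               # uploader-provided subtitles
    "writeautomaticsub": True,            # also allow auto-generated tracks
    "subtitleslangs": ["en", "zh-Hans"],  # comma separated list in the settings UI
}

with YoutubeDL(ydl_opts) as ydl:
    ydl.download(["https://www.youtube.com/watch?v=<video-id>"])
```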
Enable Index#
Enabling subtitle indexing will add the subtitle lines to Elasticsearch and make them searchable. This will increase the index size and is not recommended on low-end hardware.
Comments#
Index Comments#
Set your configuration for downloading and indexing comments. This takes the same values as documented in the `max_comments` section for the youtube extractor of yt-dlp. Provide the four fields separated by commas, without spaces: max-comments,max-parents,max-replies,max-replies-per-thread.
Examples:
- `all,100,all,30`: Get 100 max-parents and 30 max-replies-per-thread.
- `1000,all,all,50`: Get a total of 1000 comments overall, 50 replies per thread.
Comment sort method#
Change the sort method between top and new. The default is top, as decided by YouTube.
Archiving comments is slow, as only a few comments get returned per request with yt-dlp, so choose your configuration above wisely. Tube Archivist will download comments after the download queue finishes; your videos will already be available while the comments are getting downloaded. The Refresh Metadata background task will also get comments for your already archived videos, spreading the requests out over time.
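Both comment settings translate to extractor arguments for yt-dlp's youtube extractor. A rough sketch of the `all,100,all,30` example with the default top sort, using the Python API (not Tube Archivist's actual internals):

```python
from yt_dlp import YoutubeDL

ydl_opts = {
    "getcomments": True,  # extract comments at all
    "extractor_args": {
        "youtube": {
            # equivalent to the "all,100,all,30" value from the example above
            "max_comments": ["all", "100", "all", "30"],
            "comment_sort": ["top"],  # or ["new"]
        }
    },
}

with YoutubeDL(ydl_opts) as ydl:
    info = ydl.extract_info("https://www.youtube.com/watch?v=<video-id>", download=False)
    print(len(info.get("comments") or []), "comments extracted")
```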
Cookie#
Cookie Expiry: Using cookies can have unintended consequences. Multiple users have reported that their account got flagged and their cookies expired within a few hours. It appears that YT has some detection mechanism that will invalidate your cookie if it's being used outside of a browser; that is happening server side on YT. If you are affected, you might be better off not using this functionality.
Importing your YouTube Cookie into Tube Archivist allows yt-dlp to bypass age restrictions, gives access to private videos and your Watch Later or Liked Videos playlists.
Security concerns#
Cookies are used to store your session and contain your access token to your Google account. This information can be used to take over your account. Treat that data with utmost care, as you would any other password or credential. Tube Archivist stores your cookie in Redis and will automatically append it to yt-dlp for every request.
Auto import#
The easiest way to import your cookie is to use the Tube Archivist Companion browser extension for Firefox and Chrome.
Manual Update#
Alternatively, you can also manually import your cookie into Tube Archivist. Export your cookie as a Netscape formatted text file and paste the content into the text field.
- There are various tools out there that allow you to export cookies from your browser. This project doesn't make any specific recommendations.
- Once imported, a Validate Cookie File button will show, where you can confirm if your cookie is working or not.
- A cookie is considered valid if yt-dlp is able to access your private Liked Videos playlist; see the sketch below.
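Conceptually, that validation is just yt-dlp trying to open the private Liked Videos playlist with the imported cookie. A rough sketch of the idea (not Tube Archivist's actual implementation), assuming the cookie was exported to a hypothetical cookies.txt:

```python
from yt_dlp import YoutubeDL

# "LL" is the private Liked Videos playlist; it only resolves with a valid session.
ydl_opts = {
    "cookiefile": "cookies.txt",  # hypothetical path to the Netscape formatted export
    "skip_download": True,
    "extract_flat": True,
}

try:
    with YoutubeDL(ydl_opts) as ydl:
        ydl.extract_info("https://www.youtube.com/playlist?list=LL")
    print("cookie looks valid")
except Exception as err:
    print(f"cookie validation failed: {err}")
```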
Use your cookie#
Once imported, in addition to the advantages above, your Watch Later and Liked Videos playlists become regularly accessible playlists that you can download and subscribe to like any other playlist.
Limitation#
There is only one cookie per Tube Archivist instance. This will be shared between all users.
PO Token#
Also known as a proof of origin token, this is required in some cases by YT to validate requests. See the wiki on the yt-dlp repo for more info, particularly the PO Token Guide page.
Integrations#
All third party integrations of Tube Archivist will always be opt in.
API Token#
Your access token for the Tube Archivist API.
ReturnYoutubeDislike#
This will return dislikes and average ratings for each video by integrating with the API from returnyoutubedislike.com.
SponsorBlock#
Use SponsorBlock to retrieve timestamps for, and skip, sponsored content. If a video doesn't have timestamps, or has unlocked timestamps, use the browser addon to contribute to this excellent project. This can also be activated and deactivated as a per-channel overwrite.
Cast#
As Cast doesn't support authentication for static files, you'll also need to set `DISABLE_STATIC_AUTH` to disable authentication for your static files.
Enabling this integration will embed an additional third-party JS library from Google.
Requirements:
- HTTPS: To use the cast integration, HTTPS needs to be enabled. This can be done using a reverse proxy. This is a requirement from Google, as communication to the casting device is required to be encrypted, but the content itself is not.
- Supported Browser: A supported browser is required for this integration, such as Google Chrome. Other browsers, especially Chromium-based browsers, may support casting by enabling it in the settings.
- Subtitles: Subtitles are supported; however, they do not work out of the box and require additional configuration. Due to requirements by Google, you need additional headers, which have to be configured in your reverse proxy. See this page for the specific requirements.
- You need the following headers: `Content-Type`, `Accept-Encoding`, and `Range`. Note that the last two, `Accept-Encoding` and `Range`, are additional headers that you may not have needed previously (see the sketch after this list).
- Wildcards ("*") can not be used for the `Access-Control-Allow-Origin` header. If the page has protected media content, it must use a domain instead of a wildcard.
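One quick way to see what your reverse proxy currently sends is a CORS preflight request against a media URL. A small diagnostic sketch, assuming the requests library and a placeholder domain and file path:

```python
import requests

# Placeholder URL: substitute a real media file served by your instance
url = "https://tube.example.com/media/some-video.mp4"

resp = requests.options(
    url,
    headers={
        "Origin": "https://tube.example.com",  # must match your configured domain
        "Access-Control-Request-Method": "GET",
        "Access-Control-Request-Headers": "Content-Type, Accept-Encoding, Range",
    },
)

# Per the requirements above: a concrete domain (no "*") for Allow-Origin,
# and Content-Type, Accept-Encoding and Range listed in Allow-Headers.
print("Allow-Origin: ", resp.headers.get("Access-Control-Allow-Origin", "(missing)"))
print("Allow-Headers:", resp.headers.get("Access-Control-Allow-Headers", "(missing)"))
```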
Snapshots#
Info: This will make a snapshot of your metadata index only. No media files or additional configuration variables you have set on the settings page will be backed up.
System snapshots will automatically make daily snapshots of the Elasticsearch index. The task will start at 12pm your local time. Snapshots are deduplicated, meaning each snapshot only has to back up changes since the last snapshot. Old snapshots will automatically get deleted after 30 days.
- Create snapshot now: Starts the snapshot process immediately, outside of the regular daily schedule.
- Restore: Restore your index to that point in time. Select one of the available snapshots to restore from.