Building a Custom YouTube Upload Pipeline From Scratch
If you are a developer who wants full control over every aspect of your YouTube publishing workflow, you do not need a third-party SaaS tool managing your credentials and charging monthly fees. The YouTube Data API v3 gives you everything required to build a custom upload pipeline tailored to your exact requirements and workflow preferences. Here is how to build one from the ground up.
Architecture Overview
  Video Files + Metadata JSON
              |
              v
    Pipeline Worker (Node.js)
              |
         +----+----+
         |         |
         v         v
      SQLite    YouTube API v3
    Job Queue   (upload, thumbnail, playlist)
The pipeline watches a directory for new video files paired with metadata JSON files, queues upload jobs in a SQLite database for durability, and processes them sequentially with full error handling and retry logic. SQLite serves as the job queue because it is zero-configuration, crash-safe, and does not require running a separate database server.
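A companion metadata file carries everything the upload call needs. The fields below match what the worker reads in Step 3; the values themselves are illustrative:

```json
{
  "title": "Build Log #12: Custom Upload Pipeline",
  "description": "Walkthrough of the SQLite-backed upload worker.",
  "tags": ["youtube-api", "nodejs", "automation"],
  "privacy": "unlisted"
}
```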
Step 1: OAuth Setup
Create a Google Cloud project, enable the YouTube Data API v3, and create OAuth 2.0 credentials for a "Desktop application" client type. Run the authorization flow once interactively to get a refresh token that enables all future uploads without browser interaction:
const { authenticate } = require('@google-cloud/local-auth');

const auth = await authenticate({
  keyfilePath: './client_secret.json',
  scopes: [
    // youtube.upload covers videos.insert; force-ssl is needed for the
    // playlist and caption calls in Step 5
    'https://www.googleapis.com/auth/youtube.upload',
    'https://www.googleapis.com/auth/youtube.force-ssl',
  ],
});

// Save auth.credentials.refresh_token to secure storage
// This token enables all future uploads without user interaction
Store the refresh token in an environment variable or encrypted configuration file. Every subsequent upload uses this token to authenticate silently. Note that refresh tokens are not strictly permanent: Google revokes them if your OAuth consent screen is still in "Testing" status (those tokens expire after seven days), if the token goes unused for an extended period, or if you revoke the app's access from your Google account. For a production pipeline, publish the consent screen and handle re-authorization gracefully.
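On later runs, the stored refresh token is wired back into an authenticated client via the googleapis library. A minimal sketch, assuming the credentials live in environment variables (the variable names are illustrative, not required by the API):

```javascript
// Builds an authenticated YouTube client from a saved refresh token.
// Assumes GOOGLE_CLIENT_ID, GOOGLE_CLIENT_SECRET, and YT_REFRESH_TOKEN
// are set in the environment.
function makeYouTubeClient() {
  const { google } = require('googleapis');

  const oauth2Client = new google.auth.OAuth2(
    process.env.GOOGLE_CLIENT_ID,
    process.env.GOOGLE_CLIENT_SECRET
  );
  // The client library exchanges the refresh token for short-lived
  // access tokens automatically before each request.
  oauth2Client.setCredentials({ refresh_token: process.env.YT_REFRESH_TOKEN });

  return google.youtube({ version: 'v3', auth: oauth2Client });
}
```

The `youtube` object used by the worker in Step 3 would be the return value of this function.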
Step 2: Job Queue
A SQLite-backed job queue tracks the state of each upload through its lifecycle:
CREATE TABLE upload_jobs (
  id INTEGER PRIMARY KEY,
  video_path TEXT NOT NULL,
  metadata_path TEXT NOT NULL,
  status TEXT DEFAULT 'pending',
  youtube_id TEXT,
  error TEXT,
  attempts INTEGER DEFAULT 0,
  created_at DATETIME DEFAULT CURRENT_TIMESTAMP,
  completed_at DATETIME
);
New files added to the watch directory create new rows with 'pending' status. A worker process picks up pending jobs, sets them to 'uploading', processes them, and marks them 'complete' or 'failed'. This design means the pipeline can crash and restart without losing track of where it was -- pending and failed jobs are automatically retried on restart.
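The lifecycle described above reduces to a small transition rule, sketched here as a pure function (the three-attempt cap matches the worker in Step 3; the function name is illustrative):

```javascript
// Given the outcome of one processing attempt, decide the job's next status.
// A failed attempt is re-queued as 'pending' until maxAttempts is reached.
function nextStatus(succeeded, attempts, maxAttempts = 3) {
  if (succeeded) return 'complete';
  return attempts >= maxAttempts ? 'failed' : 'pending';
}
```

Keeping this rule in one place makes it easy to test the retry policy without touching the database or the API.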
Step 3: Upload Worker
The worker handles one upload at a time to avoid quota issues, with retry logic for transient failures:
async function processJob(job) {
  const meta = JSON.parse(fs.readFileSync(job.metadata_path, 'utf8'));
  // Mark the job as in-flight so a restart won't pick it up twice
  await db.run('UPDATE upload_jobs SET status=? WHERE id=?', ['uploading', job.id]);
  try {
    const res = await youtube.videos.insert({
      part: ['snippet', 'status'],
      requestBody: {
        snippet: {
          title: meta.title,
          description: meta.description,
          tags: meta.tags,
        },
        status: {
          privacyStatus: meta.privacy || 'public',
        },
      },
      media: { body: fs.createReadStream(job.video_path) },
    });
    // Mark successful, store YouTube video ID
    await db.run(
      "UPDATE upload_jobs SET status=?, youtube_id=?, completed_at=datetime('now') WHERE id=?",
      ['complete', res.data.id, job.id]
    );
  } catch (err) {
    // Re-queue transient failures; give up permanently after three attempts
    const attempts = job.attempts + 1;
    const status = attempts >= 3 ? 'failed' : 'pending';
    await db.run(
      'UPDATE upload_jobs SET status=?, error=?, attempts=? WHERE id=?',
      [status, err.message, attempts, job.id]
    );
  }
}
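One refinement worth adding to the retry path is a delay between attempts, so a flaky network or a temporary API error is not hammered three times in quick succession. A simple exponential backoff sketch (the constants are illustrative, not API requirements):

```javascript
// Delay before retry attempt n (1-based): 30s, 60s, 120s, ... capped at 10 min.
function backoffMs(attempt, baseMs = 30000, capMs = 600000) {
  return Math.min(baseMs * 2 ** (attempt - 1), capMs);
}

// Promise-based sleep so the worker loop can simply await the delay.
const sleep = (ms) => new Promise((resolve) => setTimeout(resolve, ms));
```

The worker would call `await sleep(backoffMs(job.attempts))` before re-processing a job that was re-queued as 'pending'.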
Step 4: File Watcher
Use chokidar to monitor the output directory for new video files. When a new .mp4 file appears alongside a companion .json metadata file, create a job in the queue automatically:
const chokidar = require('chokidar');

// awaitWriteFinish delays the 'add' event until the file has stopped
// growing, so a video still being encoded is not queued half-written.
chokidar
  .watch('./output/*.mp4', { awaitWriteFinish: true })
  .on('add', async (filePath) => {
    const metaPath = filePath.replace('.mp4', '.json');
    if (fs.existsSync(metaPath)) {
      await db.run(
        'INSERT INTO upload_jobs (video_path, metadata_path) VALUES (?, ?)',
        [filePath, metaPath]
      );
    }
  });
Step 5: Post-Upload Actions
After a successful upload returns a video ID, chain additional API calls to complete the publishing process:
- youtube.thumbnails.set() -- upload a custom thumbnail image using the video ID from the upload response
- youtube.playlistItems.insert() -- add the video to one or more playlists for organization and session time improvement
- youtube.captions.insert() -- upload an SRT file for search indexing and accessibility, separate from burned-in captions
The thumbnails.set and playlistItems.insert calls cost 50 quota units each, while captions.insert costs 400, so the total per video comes to about 2,100 units (1,600 for the upload plus roughly 500 for post-upload actions).
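Because the YouTube Data API's default quota is 10,000 units per day, the per-video cost directly caps daily throughput. A quick capacity estimate (the function name is illustrative; plug in your own measured per-video cost):

```javascript
// How many videos fit in the daily quota at a given per-video unit cost.
function videosPerDay(costPerVideo, dailyQuota = 10000) {
  return Math.floor(dailyQuota / costPerVideo);
}
```

At around 2,000 units per video this works out to only a handful of uploads per day; a quota increase request through the Google Cloud console is the only way past that ceiling.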
VidNo implements a similar architecture internally with the same file-watching and job-queue patterns. The key difference is that VidNo handles the entire production chain from raw recording through to upload, while building your own custom pipeline gives you fine-grained control over each individual step in the process. For developers who want that control, the YouTube API provides everything you need.