Upload with a presigned URL
This article provides guidance for web apps that want to allow users to upload videos and other assets. We recommend generating a presigned URL server-side, which lets a user upload a file directly into your cloud storage without the file passing through your server.
You can set constraints such as a maximum file size and file type, apply rate limiting, require authentication, and predefine the storage location.
Why use a presigned URL?
The traditional way of implementing a file upload is to let the client upload the file to a server, which then stores it on disk or forwards it to cloud storage. While this approach works, it's not ideal for several reasons:
- Reduce load: If many clients upload big files to the same server at once, that server can slow down or even break down under the load. With the presign workflow, the server only needs to create presigned URLs, which is far less work than handling the file transfers itself.
- Reduce spam: To prevent users from abusing your upload feature as free hosting space, you can refuse to issue a presigned URL once they exceed their allowance.
- Data safety: Since many hosting solutions today are ephemeral or serverless, files should not be stored on them. There is no guarantee the files will still exist after a server restart, and you might run out of disk space.
AWS Example
This example assumes user uploads are stored in S3.
First, accept a file in your frontend, for example using <input type="file">. You should get a File object, from which you can determine the content type and content length:
App.tsx
```tsx
const contentType = file.type || 'application/octet-stream';
const arrayBuffer = await file.arrayBuffer();
const contentLength = arrayBuffer.byteLength;
```
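If you need a starting point for obtaining the File object, here is a minimal sketch of a file input handler. The component and handler names are illustrative and not part of any library:

```tsx
import React, {useCallback} from 'react';

// Illustrative component: grabs the first selected file and hands it off
// to an upload routine you define elsewhere.
export const FilePicker: React.FC<{
  onFile: (file: File) => void;
}> = ({onFile}) => {
  const onChange = useCallback(
    (e: React.ChangeEvent<HTMLInputElement>) => {
      const file = e.target.files?.[0];
      if (file) {
        onFile(file);
      }
    },
    [onFile],
  );

  return <input type="file" onChange={onChange} />;
};
```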
This example uses @aws-sdk/s3-request-presigner and the AWS SDK imported from @remotion/lambda. Calling the function below generates two URLs:
- presignedUrl is the URL to which the file can be uploaded.
- readUrl is the URL from which the file can be read after the upload.
generate-presigned-url.ts
```ts
import {getSignedUrl} from '@aws-sdk/s3-request-presigner';
import {AwsRegion, getAwsClient} from '@remotion/lambda/client';

export const generatePresignedUrl = async (
  contentType: string,
  contentLength: number,
  expiresIn: number,
  bucketName: string,
  region: AwsRegion,
): Promise<{presignedUrl: string; readUrl: string}> => {
  if (contentLength > 1024 * 1024 * 200) {
    throw new Error(
      `File may not be over 200MB. Yours is ${contentLength} bytes.`,
    );
  }

  const {client, sdk} = getAwsClient({
    region: process.env.REMOTION_AWS_REGION as AwsRegion,
    service: 's3',
  });

  const key = crypto.randomUUID();

  const command = new sdk.PutObjectCommand({
    Bucket: bucketName,
    Key: key,
    ACL: 'public-read',
    ContentLength: contentLength,
    ContentType: contentType,
  });

  const presignedUrl = await getSignedUrl(client, command, {
    expiresIn,
  });

  // The location of the asset after the upload
  const readUrl = `https://${bucketName}.s3.${region}.amazonaws.com/${key}`;

  return {presignedUrl, readUrl};
};
```
Explanation:
- First, the upload request is checked against constraints. In this example, uploads over 200MB are rejected. You could add more constraints or rate limiting (see the sketch after this list).
- The AWS SDK is imported using getAwsClient(). If you don't use Remotion Lambda, install the @aws-sdk/client-s3 package directly.
- A UUID is used as the filename to avoid name clashes.
- Finally, the presigned URL and the read URL are calculated and returned.
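As a minimal sketch of what such rate limiting could look like, the in-memory counter below caps how many presigned URLs a user may request per hour. The checkRateLimit helper and the userId argument are illustrative and not part of Remotion or AWS; in production you would typically back this with a shared store such as Redis:

```ts
// Illustrative in-memory rate limiter: counts presign requests per user
// within a rolling one-hour window. Not suitable for multi-instance setups.
const WINDOW_MS = 60 * 60 * 1000;
const MAX_UPLOADS_PER_WINDOW = 20;

const requests = new Map<string, number[]>();

export const checkRateLimit = (userId: string) => {
  const now = Date.now();
  const timestamps = (requests.get(userId) ?? []).filter(
    (t) => now - t < WINDOW_MS,
  );

  if (timestamps.length >= MAX_UPLOADS_PER_WINDOW) {
    throw new Error('Too many uploads, try again later.');
  }

  timestamps.push(now);
  requests.set(userId, timestamps);
};
```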
Next.js example code
Here is a sample snippet for the Next.js App Router.
The endpoint lives at app/api/upload/route.ts and is reachable under /api/upload.
app/api/upload/route.ts
```ts
import {NextResponse} from 'next/server';
import {getSignedUrl} from '@aws-sdk/s3-request-presigner';
import {AwsRegion, getAwsClient} from '@remotion/lambda/client';

const generatePresignedUrl = async ({
  contentType,
  contentLength,
  expiresIn,
  bucketName,
  region,
}: {
  contentType: string;
  contentLength: number;
  expiresIn: number;
  bucketName: string;
  region: AwsRegion;
}): Promise<{presignedUrl: string; readUrl: string}> => {
  if (contentLength > 1024 * 1024 * 200) {
    throw new Error(
      `File may not be over 200MB. Yours is ${contentLength} bytes.`,
    );
  }

  const {client, sdk} = getAwsClient({
    region: process.env.REMOTION_AWS_REGION as AwsRegion,
    service: 's3',
  });

  const key = crypto.randomUUID();

  const command = new sdk.PutObjectCommand({
    Bucket: bucketName,
    Key: key,
    ACL: 'public-read',
    ContentLength: contentLength,
    ContentType: contentType,
  });

  const presignedUrl = await getSignedUrl(client, command, {
    expiresIn,
  });

  // The location of the asset after the upload
  const readUrl = `https://${bucketName}.s3.${region}.amazonaws.com/${key}`;

  return {presignedUrl, readUrl};
};

export const POST = async (request: Request) => {
  if (!process.env.REMOTION_AWS_BUCKET_NAME) {
    throw new Error('REMOTION_AWS_BUCKET_NAME is not set');
  }

  if (!process.env.REMOTION_AWS_REGION) {
    throw new Error('REMOTION_AWS_REGION is not set');
  }

  const json = await request.json();
  if (!Number.isFinite(json.size)) {
    throw new Error('size is not a number');
  }

  if (typeof json.contentType !== 'string') {
    throw new Error('contentType is not a string');
  }

  const {presignedUrl, readUrl} = await generatePresignedUrl({
    contentType: json.contentType,
    contentLength: json.size,
    expiresIn: 60 * 60 * 24 * 7,
    bucketName: process.env.REMOTION_AWS_BUCKET_NAME as string,
    region: process.env.REMOTION_AWS_REGION as AwsRegion,
  });

  return NextResponse.json({presignedUrl, readUrl});
};
```
This is how you can call it in the frontend:
Uploader.tsx
```tsx
const presignedResponse = await fetch('/api/upload', {
  method: 'POST',
  body: JSON.stringify({
    size: file.size,
    contentType: file.type,
  }),
});

const json = (await presignedResponse.json()) as {
  presignedUrl: string;
  readUrl: string;
};
```
This example does not implement any rate limiting or authentication.
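If you require authentication, you could verify the user's session before generating the URL. The getSession() helper and the './auth' module below are placeholders for whatever auth solution you use (for example NextAuth or your own session handling); they are not part of Remotion or the AWS SDK:

```ts
// Sketch: add a session check at the top of the POST handler in
// app/api/upload/route.ts. getSession() is a hypothetical helper
// standing in for your auth library's session lookup.
import {NextResponse} from 'next/server';
import {getSession} from './auth'; // hypothetical module

export const POST = async (request: Request) => {
  const session = await getSession(request);
  if (!session) {
    // Refuse to hand out presigned URLs to unauthenticated users.
    return NextResponse.json({error: 'Unauthorized'}, {status: 401});
  }

  // ...validate the body and call generatePresignedUrl() as shown above.
};
```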
Performing the upload
Using fetch()
Send the presigned URL back to the client. You can then perform the upload using the built-in fetch() function:
upload-with-fetch.ts
```ts
await fetch(presignedUrl, {
  method: 'PUT',
  body: arrayBuffer,
  headers: {
    'content-type': contentType,
  },
});
```
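Note that fetch() only rejects on network errors, not on HTTP error statuses, so you may want to check the response before treating the upload as successful. A small sketch:

```ts
const response = await fetch(presignedUrl, {
  method: 'PUT',
  body: arrayBuffer,
  headers: {
    'content-type': contentType,
  },
});

// S3 signals problems (e.g. an expired URL or a size mismatch)
// through the HTTP status code.
if (!response.ok) {
  throw new Error(`Upload failed with status ${response.status}`);
}
```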
Tracking the upload progress
As of October 2024, if you need to track the progress of the upload, you need to use XMLHttpRequest, since fetch() does not expose upload progress events.
upload-with-progress.ts
```ts
export type UploadProgress = {
  progress: number;
  loadedBytes: number;
  totalBytes: number;
};

export type OnUploadProgress = (options: UploadProgress) => void;

export const uploadWithProgress = ({
  file,
  url,
  onProgress,
}: {
  file: File;
  url: string;
  onProgress: OnUploadProgress;
}): Promise<void> => {
  return new Promise((resolve, reject) => {
    const xhr = new XMLHttpRequest();
    xhr.open('PUT', url);

    xhr.upload.onprogress = function (event) {
      if (event.lengthComputable) {
        onProgress({
          progress: event.loaded / event.total,
          loadedBytes: event.loaded,
          totalBytes: event.total,
        });
      }
    };

    xhr.onload = function () {
      if (xhr.status === 200) {
        resolve();
      } else {
        reject(new Error(`Upload failed with status: ${xhr.status}`));
      }
    };

    xhr.onerror = function () {
      reject(new Error('Network error occurred during upload'));
    };

    xhr.setRequestHeader('content-type', file.type);
    xhr.send(file);
  });
};
```
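A usage sketch: inside a React component, you might feed the progress into state. The useUpload hook below is illustrative and not part of the snippet above; it assumes uploadWithProgress is exported from upload-with-progress.ts:

```ts
import {useState} from 'react';
import {uploadWithProgress} from './upload-with-progress';

// Illustrative hook: uploads a file to the presigned URL and
// exposes the progress as a number between 0 and 1.
export const useUpload = () => {
  const [progress, setProgress] = useState(0);

  const upload = async (file: File, presignedUrl: string) => {
    await uploadWithProgress({
      file,
      url: presignedUrl,
      onProgress: ({progress}) => setProgress(progress),
    });
  };

  return {progress, upload};
};
```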