We generate thumbnails of a user's current canvas in the browser and store them in S3. Previously we sent these to the server as base64 strings and then uploaded them to S3, but with pre-signed URLs we can do better.

Running everything through the server worked, but it was sometimes slow or failed with timeouts. The traditional solution would be to take the file upload, store it locally and then send it to S3 later in a cron job. I don't like this approach though, because it adds complexity if we ever add another server, or if we need to show the thumbnail on potentially the very next request.

The better choice is using pre-signed requests in AWS S3 (and its alternatives). These allow you to create a request on your backend for an upload to or download from a private bucket, then send it to the user as a URL to fulfill. You get to keep your secret keys safe and avoid proxying everything through your own backend. It's really the same as the temporary URLs you may already be used to when serving private files.
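
For reference, a pre-signed URL is just the normal object URL with signature query parameters appended. With Signature Version 4 it looks roughly like this (values shortened and purely illustrative; the exact parameters depend on your SDK and configuration):

https://your-bucket-name.s3.amazonaws.com/Your/File/Name.png
	?X-Amz-Algorithm=AWS4-HMAC-SHA256
	&X-Amz-Credential=AKIA...%2Fs3%2Faws4_request
	&X-Amz-Date=20240101T000000Z
	&X-Amz-Expires=1200
	&X-Amz-SignedHeaders=host
	&X-Amz-Signature=...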

Backend

The AWS S3 SDKs are all pretty good and make this easy - simply prepare a request with your Bucket and file name, make it pre-signed, then send it away:

$filename = 'Your/File/Name.png';

// Get your S3 client however you need to
// For Laravel, you can get the real S3 Client instance with:
// $s3Client = Storage::disk(config('filesystems.cloud'))->getClient();
        
$command = $s3Client->getCommand('PutObject', [
	// For Laravel use something like `config('filesystems.disks.s3.bucket')`
	'Bucket' => 'your-bucket-name',
	'Key' => $filename,
	'Metadata' => [],
	'ContentType' => 'image/png'
]);

$request = $s3Client->createPresignedRequest($command, '+20 minutes');

$presigned_url = (string) $request->getUri();

You can see guides on how to make pre-signed URLs in Python and JavaScript as well.
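
If your backend happens to be Node rather than PHP, the same thing with the AWS SDK for JavaScript (v3) looks roughly like this - a minimal sketch assuming the @aws-sdk/client-s3 and @aws-sdk/s3-request-presigner packages are installed, and the bucket, region and key are placeholders:

const { S3Client, PutObjectCommand } = require('@aws-sdk/client-s3');
const { getSignedUrl } = require('@aws-sdk/s3-request-presigner');

const s3Client = new S3Client({ region: 'us-east-1' });

async function makePresignedUploadUrl(filename) {
	const command = new PutObjectCommand({
		Bucket: 'your-bucket-name',
		Key: filename,
		ContentType: 'image/png'
	});

	// 1200 seconds = 20 minutes, to match the PHP example above
	return getSignedUrl(s3Client, command, { expiresIn: 1200 });
}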

Get that URL to your frontend somehow - I bundled it in with the response from saving a scene:

// For Laravel I used `return response()->json([ /* The array below */ ]);`
return [
	'scene' => [
		...
	],
	'presigned_url' => $presigned_url
];
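
On the frontend, pulling it out of that response might look something like this (a rough sketch - the /scenes endpoint and scenePayload are stand-ins for whatever your save request already is):

// Hypothetical save request; adjust to match your own endpoint and payload
axios.post('/scenes', scenePayload).then(response => {
	const presigned_url = response.data.presigned_url;
	// ...hand presigned_url to the upload code below
});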

Frontend

I have a <canvas> element, so I can choose between toBlob or toDataURL - let's go with the better choice of a blob to save converting things around.

We get our blob from our <canvas> element like this:

_renderer.domElement.toBlob(canvasBlob => uploadRender(canvasBlob));

In the above example, _renderer is an instance of WebGLRenderer from the wonderful Three.js and domElement is a <canvas> element which you can read about here.

// Get the `presigned_url` from your backend somehow

return axios.put(
	presigned_url,
	canvasBlob,
	// base64 alternative
	// this.b64toBlob(renderDataUrl.split(',')[1])
	{
		headers: {
			'Content-Type': 'image/png'
		}
	}
);

You can see an example of it all together and wrapped in a Promise here.

// Get the canvas in Three.js with `renderer.domElement`
// Get the presigned URL from your server with another request

/**
 * @param {HTMLCanvasElement} canvas
 * @param {string} presigned_url
 * @returns {Promise} axios put promise
**/
function saveRender(canvas, presigned_url){
	return new Promise((resolve, reject) => {
		canvas.toBlob(canvasBlob => {
			if (canvasBlob == null){
				reject();
				return;
			}

			resolve(
				axios.put(
					presigned_url,
					canvasBlob,
					{
						headers: {
							'Content-Type': 'image/png'
						}
					}
				)
			)
		})
	});
}

S3 Configuration

No work with AWS goes without having to edit some settings. So let's set up CORS support for the new PUT requests so that our frontend can send requests to S3 without hitting security walls.

In your S3 Console, navigate to the bucket and open the Permissions tab. At the bottom you will want to edit the Cross-origin resource sharing (CORS) to be something like this:

[
    {
        "AllowedHeaders": [
            "Content-Type"
        ],
        "AllowedMethods": [
            "GET",
            "PUT"
        ],
        "AllowedOrigins": [
            "https://www.your-domain.com",
        ],
        "ExposeHeaders": []
    }
]

Obviously, change your-domain to whatever you need.

But I wanted base64

Terrain Tinker had used base64 when passing files through to the server first so that I didn't have to deal with blobs. Here is a snippet you can use to upload base64 data URLs instead of blobs:

/**
 * Convert a base64 string into a Blob according to the data and contentType.
 *
 * @param {String} b64Data Pure base64 string without the contentType prefix
 * @param {String} contentType The content type of the file e.g. image/jpeg, image/png, text/plain
 * @param {Int} sliceSize Slice size used to process the byteCharacters
 * @see http://stackoverflow.com/questions/16245767/creating-a-blob-from-a-base64-string-in-javascript
 * @return Blob
 */
function b64toBlob(b64Data, contentType, sliceSize) {
	contentType = contentType || '';
	sliceSize = sliceSize || 512;
	var byteCharacters = atob(b64Data);
	var byteArrays = [];
	for (var offset = 0; offset < byteCharacters.length; offset += sliceSize) {
		var slice = byteCharacters.slice(offset, offset + sliceSize);
		var byteNumbers = new Array(slice.length);
		for (var i = 0; i < slice.length; i++) {
			byteNumbers[i] = slice.charCodeAt(i);
		}
		var byteArray = new Uint8Array(byteNumbers);
		byteArrays.push(byteArray);
	}
	var blob = new Blob(byteArrays, {type: contentType});
	return blob;
}

You can see an example of how to use it in the Frontend section as a comment. You can get the Data URL from a <canvas> with let b64String = domElement.toDataURL(); - no promises or callbacks needed.
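
Putting that base64 path together - a sketch assuming the same presigned_url and axios setup as above:

// Grab the canvas as a data URL, strip the "data:image/png;base64," prefix,
// convert it to a Blob, then PUT it to the pre-signed URL as before
const dataUrl = canvas.toDataURL('image/png');
const canvasBlob = b64toBlob(dataUrl.split(',')[1], 'image/png');

return axios.put(presigned_url, canvasBlob, {
	headers: {
		'Content-Type': 'image/png'
	}
});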

Bye

Tada, your frontend now handles uploading to S3 almost all on its own. Your server has less load and you hopefully feel better about your code. Or at least I did.