About CloudFlare's Durable Objects

Published 6 months ago by sai @ pretzelbox
I've been low-key interested in CloudFlare's Durable Objects for a while now. The entire idea of compute + storage at the edge sounded really appealing when I first heard of it.
Later, when I tried them for the first (and only) time, I ran into enough issues that I dropped the idea of building anything around DOs.
There was a reason for that misstep, though.
I was (and am) building PretzelBox, which relies on S3 objects, and I was looking for a way to let people edit files concurrently.
Earlier today, a second reading of a few blog posts on Durable Objects was invaluable in clearing up a few things.
I'll expand on what exactly Durable Objects are in this post.
But first, What They Are Not
Durable Objects are not files or S3-style objects, even though Durable Objects are written to disk when not in use.
For one, there is no listing of Durable Objects anywhere, and no public API to read a Durable Object back. Similarly, there is no API to read specific bytes from a Durable Object (unlike S3 Select), nor can you pass in a new file object to replace an existing Durable Object.
All said, conflating S3 Objects and Durable Objects was my original error and that set me back months. You should not make that mistake.
So What are They
Imagine the app was not running on your computer. Instead, it was running on CloudFlare. Of course, CloudFlare is just a name for a specific set of servers and assets, which means your app is running on machines other than your own.
With me so far?
...and CloudFlare has decided to pin each instance to a specific machine and make that object's id publicly addressable.
If each instance were a person, your app would look like a crowd of individually addressable people. (Illustration courtesy StorySet.)
This is what they are calling Durable Objects.
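A minimal sketch of what one of these instances looks like in code. The class name `Memo` and the storage key are my own illustrative choices, not anything from CloudFlare's docs:

```javascript
// A Durable Object is just a class: CloudFlare constructs one
// instance per id and hands it transactional storage via `state`.
class Memo {
  constructor(state, env) {
    this.state = state; // this.state.storage persists across requests
  }

  // Every request addressed to this object's id arrives here,
  // on this single instance.
  async fetch(request) {
    const prev = await this.state.storage.get("lastSeen");
    await this.state.storage.put("lastSeen", new Date().toISOString());
    return new Response(prev ? `last seen ${prev}` : "first visit");
  }
}
// In a real Worker you would `export` this class and bind it in
// wrangler.toml; that is omitted to keep the sketch self-contained.
```

The state survives between requests without you ever naming a bucket or a database table.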
This sounds like...
In AWS terms, a Durable Object is essentially a combination of a Lambda function and an associated S3 object. Even AWS recognizes this pairing and has a product called S3 Object Lambda.
But there is a catch in this mental model.
You can have any number of instances of a Lambda running at any given moment. Where Durable Objects stand out is the guarantee that, if you want, multiple requests will all go to the same `Lambda` (so to speak), which can then coordinate between them.
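Here is a sketch of what that guarantee looks like from the routing side, using the Durable Object namespace API. The binding name `COUNTER` and the name `user-42` are assumptions for illustration:

```javascript
// The Worker in front derives a stable id from a name and forwards
// the request. "user-42" always maps to the same Durable Object, so
// that single instance can coordinate every request it receives.
const worker = {
  async fetch(request, env) {
    const id = env.COUNTER.idFromName("user-42"); // stable: same name, same id
    const stub = env.COUNTER.get(id);             // handle to the one live instance
    return stub.fetch(request);                   // forwarded, wherever it runs
  },
};
// In a real Worker this object would be the `export default`.
```

Contrast that with Lambda, where two concurrent requests may land on two different instances that know nothing about each other.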
Conclusion (with an example)
Superficially, this seems like a nice-ish little tool to have in your serverless toolkit but the more you think about it, the more radical it starts to seem.
- Your serverless compute is now addressable. It is now possible to coordinate between different users through a single compute instance.
- Your content is durable without having to worry about setting up specific S3 Buckets and Keys.
A central tenet of HTTP is to have addressable content (resources). Durable Objects add addressable compute. The combination can be killer for the right use case.
Say you need to maintain per user API usage counts. You could simply call the right Durable Object after fielding every request to update the usage count.
Historically, API usage had to be maintained as a database entry with a piece of compute running in front of it to handle updates.
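That per-user counter might look something like this as a Durable Object. The class name and storage key are hypothetical, but the shape follows the storage API shown earlier:

```javascript
// One UsageCounter instance per user, addressed by the user's id.
// Because every request for a given user lands on this same
// instance, increments never race and no database row is needed.
class UsageCounter {
  constructor(state, env) {
    this.state = state;
  }

  async fetch(request) {
    let used = (await this.state.storage.get("used")) || 0;
    used += 1; // safe: this instance coordinates its own requests
    await this.state.storage.put("used", used);
    return new Response(JSON.stringify({ used }), {
      headers: { "content-type": "application/json" },
    });
  }
}
```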
Anyway, that's it from me.
I wanted to write this post to better understand Durable Objects and help others trying to wrap their heads around the concept. Hope it helps someone.