I’m writing a book of simple techniques to help developers improve their writing. My book will teach you how to:

- Create clear and pleasant software tutorials
- Attract readers and customers through blogging
- Write effective emails
- Minimize pain in writing design documents
Uploading to Backblaze backends no longer works
Restore fails with transaction not available
Litestream is an open-source tool that backs up SQLite databases to cloud storage in real time. I love it and use it in all of my projects.
Litestream is owned by Fly.io, and they paused development on Litestream for almost two years in favor of an alternative project called LiteFS. Two weeks ago, Ben Johnson, Litestream’s creator and lead developer, announced that they were shifting focus back to Litestream and had just published a new release, 0.5.0.
I tried out Litestream 0.5.0, but I’d caution other Litestream users to wait for another release and more extensive testing before deploying it in production. My migration to the new version was bumpy.
Two migration tasks are intentionally required when upgrading from previous versions of Litestream to v0.5.0 and above:
One of the benefits of Litestream 0.5.0 is that there’s now an official litestream Docker image. (Edit: Reader placardloop points out that the Docker image is not new; I just never noticed it.) All of my previous Docker containers required a lot of boilerplate to download the correct version of Litestream and make it available in my container, but now it reduces to a single Dockerfile line:
To test out Litestream 0.5.0, I tried deploying it on my project, What Got Done. This is a good project for testing because:
To start the migration, I downloaded the latest copy of my data using Litestream 0.3.13 and then tried to use Litestream 0.5.0 to upload it back to Backblaze’s cloud storage in Litestream’s new format. But I hit this error:
error" db=store.db replica=s3 error="write ltx file: s3: upload to db/0000/0000000000000001-0000000000000001.ltx: operation error S3: PutObject, resolve auth scheme: resolve endpoint: endpoint rule error, Custom endpoint `s3.us-west-002.backblazeb2.com` was not a valid URI"
I tried several alternative ways of specifying the Backblaze S3 endpoint, but Litestream rejected them all as configuration errors before even attempting a backup. The only configuration Litestream accepted as valid was my original one, and that one failed at upload time.
I filed Backblaze replica fails with “Custom endpoint … was not a valid URI” #789, and Litestream developer Cory LaNou fixed it the next day.
Now that I was able to upload data to Backblaze in Litestream’s new format, I was unblocked from integrating Litestream 0.5.0 into What Got Done.
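The restore-and-reupload flow above can be sketched roughly like this. This is only a sketch: the bucket name and database path are placeholders, and the versioned binary names are hypothetical, just to distinguish which release runs each step.

```shell
# Sketch of the migration; bucket/path are placeholders, and the
# versioned binary names are hypothetical.

# 1. Download the latest snapshot with the old binary, which still
#    reads the pre-0.5.0 replica format.
litestream-0.3.13 restore -o store.db s3://my-bucket/db

# 2. Re-upload the database with the new binary, which writes the new
#    LTX-based format to the replica. (replicate runs continuously
#    until interrupted.)
litestream-0.5.0 replicate store.db s3://my-bucket/db
```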
$ litestream restore -help | grep if-replica-exists --after-context=1
  -if-replica-exists
    	Returns exit code of 0 if no backups found.
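For context, my start script follows Litestream’s usual restore-then-exec pattern. A minimal sketch, with the database path, server command, and replica URL as placeholders:

```shell
#!/bin/sh
set -e

# Restore the most recent backup if no local copy exists.
# -if-replica-exists makes a missing backup a no-op instead of an
# error, so first boots with an empty bucket still succeed.
litestream restore -if-db-not-exists -if-replica-exists \
  -o /app/data/store.db "$REPLICA_URL"

# Run the server under litestream so writes stream to the replica.
exec litestream replicate -exec "/app/server" /app/data/store.db "$REPLICA_URL"
```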
Undeterred by the loss of -if-replica-exists, I removed it from my start script. But then my server failed to start with a new error:
That turns out to match this open Litestream issue, with an alarming severity of “CRITICAL – Complete Data Loss”:
At this point, I was just willing to try anything to get back up and running, so I ran the latest bleeding edge version of Litestream by building it from source in my Docker container.
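Building from source is straightforward since Litestream is a Go project. A sketch of the commands, with the checkout and output paths as placeholders; in my Docker container these run in a Go builder stage, and the resulting binary is copied into the final image:

```shell
# Build litestream from the latest development source (sketch; the
# checkout location and output path are placeholders).
git clone https://github.com/benbjohnson/litestream.git /tmp/litestream
cd /tmp/litestream
go build -o /app/litestream ./cmd/litestream
```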
Fortunately, the latest version got around whatever transaction not available issue I was hitting, and Litestream made it further in the process!
level=ERROR msg="failed to run" error="create temp database path: open /app/data/store.db.tmp: no such file or directory"
I checked the source and saw that the folder-creation logic had disappeared from this code path. It was simple enough to address, so I submitted a fix:
I was able to get Litestream 0.5.x working with a pre-release fork, but I’m going to hold off deploying it to my other projects for another release or two. The 0.5.0 changes seem to have been more disruptive than the Litestream folks expected, and they’re still struggling with some serious bugs:
And there are several other serious bugs that they’ve fixed in the development version but are not yet in a production release (Update: these are now fixed in 0.5.1):
Note: Again, this is not a criticism of Litestream. Streaming replication is hard to do correctly, and what they’re doing is way more robust than what I’d be able to produce. I’m grateful to the Litestream team for responding to bug reports and fixing issues so quickly.
Update: Litestream 0.5.1 is now available and fixes most (but not all) of the issues I encountered.
Litestream has published a helpful migration guide with more details.
COPY --from=litestream/litestream:0.5.0 /usr/local/bin/litestream /app/litestream
The same replica definition had worked in previous versions, so I was a bit puzzled.
access-key-id: ${LITESTREAM_ACCESS_KEY_ID}
secret-access-key: ${LITESTREAM_SECRET_ACCESS_KEY}

dbs:
  - path: ${DB_PATH}
    replica:
      url: s3://${LITESTREAM_BUCKET}/db
      endpoint: ${LITESTREAM_ENDPOINT}
I deployed Litestream 0.5.0 to What Got Done, but the server failed to boot with this error:
I checked the command documentation, and it said that -if-replica-exists was still supported:
level=ERROR msg="failed to run" error="cannot calc restore plan: transaction not available"
Unfortunately, there was still one error to overcome:
With my fork of Litestream with the final mkdir fix applied, What Got Done was back up and running!
It turns out that the flag was removed by mistake and will be back in 0.5.1.


