How does this compare to something like magic-wormhole?
Is the CLI able to write to stdout (to pipe into other commands, like tar/ssh)?
Update: Just squashed a race condition in the OTP worker that was affecting some early logins. If you had trouble an hour ago, try again! The source on the Git repo has also been updated to align with the latest release - thanks to mdmelo for reporting that.
how are you handling the encryption keys? are files encrypted client-side before upload, or is that handled on the backend?
basically, if your db gets breached can someone access my decrypted files?
A database breach wouldn't compromise your files because we never store the secrets needed to access them.
When you start an upload, the API generates a unique bID and secret, then provides a short-lived presigned URL. Your file transfers directly from your terminal to S3—it never passes through our servers. Once the upload completes, the bID and secret are returned to your CLI.
The secret is immediately discarded by our backend. Only its cryptographic hash is stored in our database.
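Roughly, the issuance step looks like this. This is a simplified sketch in Python, not the actual backend code: the function names, bucket, and in-memory store are placeholders, and it assumes AWS credentials are configured for boto3.

```python
# Simplified sketch of the upload-issuance flow (illustrative names only).
# The secret is hashed before anything is persisted; only the presigned URL,
# the bID, and the plaintext secret are handed back to the CLI.
import hashlib
import secrets

import boto3

s3 = boto3.client("s3")
BUCKET = "example-uploads"           # placeholder bucket name
_records: dict[str, str] = {}        # stand-in for the real database


def start_upload() -> dict:
    bid = secrets.token_urlsafe(8)       # unique blob identifier
    secret = secrets.token_urlsafe(32)   # returned to the client, never stored
    secret_hash = hashlib.sha256(secret.encode()).hexdigest()

    # Persist only bid -> hash; the plaintext secret is discarded after this.
    _records[bid] = secret_hash

    url = s3.generate_presigned_url(
        "put_object",
        Params={"Bucket": BUCKET, "Key": bid},
        ExpiresIn=600,                   # short-lived, e.g. 10 minutes
    )
    return {"bID": bid, "secret": secret, "upload_url": url}
```

The CLI then PUTs the file straight to that URL; nothing in the request ever touches our servers.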
Wait, but where’s the file encryption happening?
In the current release, files rely on S3's server-side encryption. The bID + secret function as access control mechanisms rather than encryption keys. The zero-knowledge aspect refers to us never storing the secrets themselves - only cryptographic hashes - so we can't authorize downloads even if our database is compromised.
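In other words, authorizing a download is just re-hashing whatever secret the client presents and comparing against the stored hash. A minimal sketch of that check (names are illustrative, not the real handler):

```python
# Sketch of download authorization: only the SHA-256 of the secret is stored,
# so the backend re-hashes the presented value and compares in constant time.
import hashlib
import hmac


def authorize_download(presented_secret: str, stored_hash: str) -> bool:
    presented_hash = hashlib.sha256(presented_secret.encode()).hexdigest()
    # A leaked database row only contains stored_hash, which can't be
    # reversed into the secret needed to pass this check.
    return hmac.compare_digest(presented_hash, stored_hash)
```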
I've been actively working on adding client-side encryption to the CLI so files are encrypted by the binary before upload. The CLI implementation is ready, but I'm working through aligning the web dashboard with these changes. Should have it deployed in the next few days.
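For a rough idea of what that client-side step could look like (purely illustrative, not the shipped implementation - the real CLI isn't Python and the key handling will differ): encrypt locally before the presigned PUT, so S3 only ever holds ciphertext and the key never leaves the machine.

```python
# Hypothetical client-side encryption sketch using the `cryptography`
# package's Fernet recipe; the actual CLI implementation may differ.
from pathlib import Path

from cryptography.fernet import Fernet


def encrypt_for_upload(path: str) -> tuple[bytes, bytes]:
    key = Fernet.generate_key()                          # stays on the client
    ciphertext = Fernet(key).encrypt(Path(path).read_bytes())
    return key, ciphertext                               # upload only the ciphertext
```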
Love it!! Easily helped me move some large files! Super easy to use as well.
Timesaver and easy to use