>Without those periodic full page images in the log, the storage layer would have to replay an infinitely long chain of small deltas to reconstruct a page for a read request. What was once a bounded O(checkpoint frequency) replay becomes an unbounded chain, leading to a spike in read latency and resource consumption.
I don't follow: read requests are not served from the WAL. They read the current state of the page from the buffer cache, where the page is updated after the change (FPI or not) is written to the WAL.
This applies to our storage implementation. In the Lakebase architecture, storage serves pages, and it doesn't always have the most recent version of a page, so it reconstructs the page on demand.
In the past we relied on Postgres compute to periodically send a full page image, so reconstructing a page was always a bounded process. Once we turned that off (and got all those perf gains) we had another problem: unbounded page reconstruction, which we had to solve separately.
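To make the bounded-vs-unbounded distinction concrete, here is a minimal sketch of delta-chain page reconstruction. This is not the actual Lakebase code: PageHistory and reconstruct are made-up names, and a delta is simplified to an (lsn, offset, payload) triple.

    from dataclasses import dataclass, field

    PAGE_SIZE = 8192

    @dataclass
    class PageHistory:
        full_images: list = field(default_factory=list)  # (lsn, full image)
        deltas: list = field(default_factory=list)       # (lsn, offset, payload)

    def reconstruct(history, target_lsn):
        # Start from the newest full page image at or below target_lsn.
        base_lsn, image = max(
            (fi for fi in history.full_images if fi[0] <= target_lsn),
            key=lambda fi: fi[0],
        )
        page = bytearray(image)
        # Replay every delta between the base image and target_lsn.
        # Periodic full page images keep this loop bounded; without
        # them the chain (and read latency) can grow without limit.
        for lsn, offset, payload in history.deltas:
            if base_lsn < lsn <= target_lsn:
                page[offset:offset + len(payload)] = payload
        return bytes(page)

    # One full image at LSN 100 plus two small deltas.
    h = PageHistory(full_images=[(100, b"\x00" * PAGE_SIZE)])
    h.deltas = [(101, 0, b"A"), (102, 1, b"B")]
    assert reconstruct(h, 102)[:2] == b"AB"

Dropping the periodic full images makes the write path cheaper, but every read of a cold page then pays for the replay loop above, which is the separate problem mentioned.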
I'm a VP at Databricks and the former CEO of Neon. Happy to answer performance-related or any other questions here.
Thanks for offering. Eyeballing the graph labeled "Prod customer throughput: (higher is better)", within a week you're seeing a ~2k QPS increase in peak throughput over the previous week.
Operationally, how do you handle landing that large a perf improvement? If my data store changed that much in a week, it could break something.
How does it affect HA Postgres (replicas, consensus, etc.)? Especially with extensions like Citus.
This specific perf improvement is orthogonal to HA.
However generally disaggregating storage makes HA simpler and allows for things like zero downtime patching: https://www.databricks.com/blog/zero-downtime-patching-lakeb...
Read replicas can be "shallow": you don't need to replicate all the data to create a replica, which allows them to be created very quickly (sub-second).
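A toy model of what "shallow" means here (hypothetical names, not the real implementation): creating the replica copies nothing, and pages are fetched from shared storage on first access.

    # SharedStorage and ShallowReplica are made-up illustrative names.
    class SharedStorage:
        def __init__(self, pages):
            self.pages = pages  # page_id -> bytes

        def get(self, page_id):
            return self.pages[page_id]

    class ShallowReplica:
        def __init__(self, storage):
            # Creation is O(1): just a reference to shared storage plus
            # an empty local cache, which is why it can be sub-second.
            self.storage = storage
            self.cache = {}

        def read(self, page_id):
            if page_id not in self.cache:
                # Lazy fetch: pay for a page only when it is first read.
                self.cache[page_id] = self.storage.get(page_id)
            return self.cache[page_id]

    storage = SharedStorage({1: b"hello", 2: b"world"})
    replica = ShallowReplica(storage)   # instant; no data copied
    assert replica.read(1) == b"hello"  # fetched on demand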
All the extensions still work. We don't support Citus today, but that's mostly because customers aren't asking for it rather than due to technical limitations. We support lots of extensions: https://docs.databricks.com/aws/en/oltp/projects/extensions
I'm not a proper DBA, but I oversee some basic Postgres installs (read: logging, monitoring, upgrades).
This appears to only have an effect on data-lake-style installs, where storage is separate from compute.
It's not going to have any effect on those small Postgres installs backing that generic one-off app.
Everyone thinks they need a data lake when most people just need a data pond or data puddle. This is made worse by the industry disappearance of the DBA role and compounded by the fact that PG is not especially easy to tune.
All of this is to say that a ton of people are on some sort of managed cloud Postgres, where compute is almost always separated from storage, even for small instances.
Neon et al. will tell you they scale, and I'm sure they can, but the number of enterprises that actually exceed what can be put on a few large servers is pretty low. You gotta lock them in early so their orgs never develop the expertise to move off, on the off chance they get big.
We provide fully managed Postgres. Many of our customers use it for lots of small Postgres instances, since Lakebase is so lightweight.
Small and large instances benefit from this performance optimization.