The Supabase platform automatically optimizes your Postgres database to take advantage of the compute resources of your project's tier. However, these optimizations are based on assumptions about the typical workload, and better results can often be obtained by tuning the database for your particular workload.
Optimizing the number of connections
By default, the number of connections allowed to Postgres and PgBouncer is configured based on the resources available to the database.
| Compute Add-on | Postgres connections | PgBouncer connections |
If the number of connections is insufficient, you will receive the following error upon connecting to the DB:
$ psql -U postgres -h ...
FATAL: remaining connection slots are reserved for non-replication superuser connections
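Before hitting this error, you can check how close you are to the limit by comparing the configured maximum against the connections currently in use. A minimal sketch (note that a few slots are reserved for superuser connections, so the usable limit is slightly lower than max_connections):

```sql
-- Current setting for the maximum number of connections
show max_connections;

-- Connections currently in use, grouped by state
select state, count(*)
from pg_stat_activity
group by state;
```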
In such a scenario, you can either upgrade to a larger compute add-on, or manually configure the database to allow a higher number of connections.
You can configure Postgres by executing the following statement, followed by a server restart:
alter system set max_connections = '<val-here>';
Note that the default configuration used by the Supabase platform optimizes the database to maximize resource utilization. As a result, you might also need to adjust other options (e.g. maintenance_work_mem) to tune them for your use case and to avoid causing instability in your database.
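For example, raising the connection limit while reducing per-connection and maintenance memory might look like the following. The values shown are purely illustrative; choose numbers appropriate for your instance's available memory:

```sql
-- Illustrative values only; tune to your instance's resources.
alter system set max_connections = '200';
alter system set work_mem = '4MB';
alter system set maintenance_work_mem = '128MB';
-- A server restart is required for max_connections to take effect.
```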
Once overridden, the Supabase platform will continue to respect your manually configured value (even if the add-on size is changed), unless the override is removed with the following statement, followed by a server restart:
alter system reset max_connections;
alter system reset <other-overridden-conf>;
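If you are unsure which settings you have overridden, you can list them: ALTER SYSTEM writes overrides to postgresql.auto.conf, and pg_settings records the source file for each setting.

```sql
-- List settings that were overridden via ALTER SYSTEM
select name, setting
from pg_settings
where sourcefile like '%postgresql.auto.conf';
```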
Configuring the number of PgBouncer connections is not supported at this time.
Examining Query Performance
Every Supabase project has the pg_stat_statements extension enabled by default. This extension records query execution performance details and is the best way to find queries that take the most time to execute. This information can be combined with the Postgres query plan analyzer to develop more efficient queries.
Obtaining information from pg_stat_statements:
select mean_exec_time + stddev_exec_time, * from pg_stat_statements order by 1 desc;
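A more targeted variant ranks statements by the total time spent executing them, which surfaces both slow queries and fast queries that run very often. The column names below are for pg_stat_statements 1.8 and later (shipped with Postgres 13+); older versions use total_time and mean_time instead:

```sql
-- Top 10 statements by cumulative execution time
select calls,
       total_exec_time,
       mean_exec_time,
       query
from pg_stat_statements
order by total_exec_time desc
limit 10;
```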
Using the query plan analyzer on your expensive queries:
explain analyze <query-statement-here>;
Be careful using explain analyze with delete queries, because the query will actually run, and could have unintended side-effects.
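One way to limit the risk is to wrap the statement in a transaction and roll it back, so the plan can be inspected without keeping the changes. A sketch, using a hypothetical orders table for illustration:

```sql
-- The delete still executes, but the rollback discards its effects.
begin;
explain analyze delete from orders
  where created_at < now() - interval '1 year';
rollback;
```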
Using the query plan analyzer to optimize your queries is a large topic, with a number of online resources available:
You can pair the information available from pg_stat_statements with the detailed system metrics available via your metrics endpoint to better understand the behavior of your DB and the queries you're executing against it.