In this tutorial, we will explain how to easily limit the size of a PostgreSQL table (by size we mean the number of rows). To achieve this, we will create a trigger on the table via Java JDBC. The trigger is executed automatically after each SQL insert, within the same transaction, and an insert that causes the trigger to raise an exception is rolled back. Create a file called trigger.sql in your Java resources folder with the following content (replace $table and $max with your own values):
-- The trigger fires AFTER each insert, so the count already includes the new row:
-- comparing with > $max lets the table hold exactly $max rows, never more.
CREATE OR REPLACE FUNCTION check_number_of_row()
RETURNS TRIGGER AS $$
BEGIN
    IF (SELECT COUNT(*) FROM $table) > $max THEN
        RAISE EXCEPTION 'Table is full';
    ELSE
        RETURN NEW;
    END IF;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER tr_check_number_of_row
AFTER INSERT ON $table
FOR EACH ROW EXECUTE PROCEDURE check_number_of_row();
Load the SQL file in Java (readAllBytes() requires Java 9+; remember to import java.io.InputStream and java.nio.charset.StandardCharsets):
InputStream is = getClass().getClassLoader().getResourceAsStream("trigger.sql");
String sql = new String(is.readAllBytes(), StandardCharsets.UTF_8);
is.close();
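If you would rather keep the placeholders in trigger.sql and substitute them at runtime instead of editing the file, a minimal sketch looks like this (the table name my_table and the limit 1000 are just example values):
// my_table and 1000 are hypothetical; use your own table name and row limit.
sql = sql.replace("$table", "my_table").replace("$max", "1000");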
Then, starting from an open java.sql.Connection to your PostgreSQL database, create the trigger with the following code (java.sql.Statement import needed):
try (Statement stmt = connection.createStatement()) {
    // Runs both statements from trigger.sql: the function definition and the trigger creation.
    stmt.executeUpdate(sql);
}
To test the trigger, you can try to insert more rows than $max into the table via JDBC. You should get the following exception:
ERROR: Table is full
  Where: PL/pgSQL function check_number_of_row() line 4 at RAISE
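For example, a quick test along these lines exercises the limit (the table name my_table, its name column, and the limit 1000 are assumptions to adapt to your schema; it uses java.sql.PreparedStatement and java.sql.SQLException):
int max = 1000; // the same value used for $max
// my_table and its name column are placeholders for your own schema.
try (PreparedStatement ps = connection.prepareStatement("INSERT INTO my_table(name) VALUES (?)")) {
    for (int i = 0; i <= max; i++) {      // one insert more than the limit
        ps.setString(1, "row " + i);
        ps.executeUpdate();               // the final insert raises the exception above
    }
} catch (SQLException e) {
    System.out.println(e.getMessage());   // ERROR: Table is full ...
}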
The insert that violates the table size limit is rolled back, so we have the guarantee that the table never holds more than $max rows. This works because, as the PostgreSQL documentation on PL/pgSQL notes:
Functions and trigger procedures are always executed within a transaction
so the count check in the trigger and the offending insert succeed or fail together. Be aware, however, that under the default READ COMMITTED isolation level two concurrent transactions do not see each other's uncommitted rows, so simultaneous insertions can each pass the check and push the table slightly over the limit; if you need a strict guarantee under concurrent insertions, run them at the SERIALIZABLE isolation level (or take an explicit lock on the table).
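As a sketch of the stricter option, the inserting transaction can be run at the SERIALIZABLE level through JDBC (my_table is again a placeholder, the enclosing method is assumed to declare throws SQLException, and serialization failures, SQLSTATE 40001, should be retried):
connection.setAutoCommit(false);
connection.setTransactionIsolation(Connection.TRANSACTION_SERIALIZABLE);
try (Statement stmt = connection.createStatement()) {
    stmt.executeUpdate("INSERT INTO my_table(name) VALUES ('row')");
    connection.commit();
} catch (SQLException e) {
    connection.rollback(); // retry on a serialization failure, give up on "Table is full"
}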