PostgreSQL: Implementing fixed-size tables with triggers

Updated: February 1, 2024 By: Guest Contributor

Understanding the Basic Concepts

In database management, a less common but critical need is to maintain fixed-size tables in PostgreSQL. This need arises wherever you must tightly control the amount of stored data: caching systems, log rotation, and predefined storage quotas, among others. In this tutorial, you’ll learn how to implement fixed-size tables in PostgreSQL using triggers, capping a table at a specified number of records.

Understanding Fixed-Size Tables

Fixed-size tables restrict the number of records to a predetermined limit. Once this limit is exceeded, older records are discarded to make room for new ones. This balancing act is commonly managed by the FIFO (First In, First Out) approach. In PostgreSQL, there are no built-in table types to enforce such a limit; however, this behavior can be achieved using database triggers.

What are Triggers?

Triggers in PostgreSQL are callback functions that run automatically, or are ‘triggered’, when certain database events occur, such as INSERT, UPDATE, or DELETE operations. They are particularly useful for enforcing constraints, maintaining audit logs, and keeping tables synchronized.
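As a quick illustration of that anatomy, a trigger always pairs a PL/pgSQL function returning TRIGGER with a CREATE TRIGGER statement that binds it to a table and an event. The following is a minimal sketch using hypothetical names (audit_log, some_table, fn_audit), not part of the setup below:

-- Hypothetical example: record every deletion from a table in an audit log
CREATE OR REPLACE FUNCTION fn_audit()
RETURNS TRIGGER AS $$
BEGIN
  -- OLD holds the row being deleted; NEW would hold the incoming row on INSERT/UPDATE
  INSERT INTO audit_log (table_name, deleted_at) VALUES (TG_TABLE_NAME, now());
  RETURN OLD;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER trg_audit
AFTER DELETE ON some_table
FOR EACH ROW EXECUTE FUNCTION fn_audit();

The same two-step pattern, a function plus a CREATE TRIGGER binding, is what we will use for the fixed-size table below.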

Set Up Your PostgreSQL Environment

-- Sample database setup
CREATE DATABASE fixed_size_db;
\connect fixed_size_db;

Once your environment is set up, let’s start by creating a sample fixed-size table.

Creating a Fixed-Size Table

CREATE TABLE log_messages (
  id SERIAL PRIMARY KEY,
  log_text TEXT NOT NULL,
  log_time TIMESTAMP WITH TIME ZONE DEFAULT CURRENT_TIMESTAMP
);

Now that you have a log table, let’s define its size limit and then create a trigger function to enforce this limit.

Defining Table Size Limit

-- We'll use a PL/pgSQL variable to store the table size limit
DO $$
BEGIN
  IF NOT EXISTS (
    SELECT 1
    FROM pg_catalog.pg_class c
    JOIN pg_catalog.pg_namespace n ON n.oid = c.relnamespace
    WHERE c.relname = 'table_size_limit'
    AND n.nspname = 'public'
  ) THEN
    CREATE TABLE public.table_size_limit (
      table_name TEXT UNIQUE NOT NULL,
      size_limit INT NOT NULL
    );
  END IF;
END;
$$;

-- Insert the size limit for our log_messages table
INSERT INTO public.table_size_limit (table_name, size_limit) VALUES ('log_messages', 100000);

The ‘table_size_limit’ table does not enforce anything by itself; it only stores the metadata that our trigger will consult when checking the threshold.
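One benefit of keeping the limit in a metadata table is that the cap can be changed later without redefining the trigger. For example (an illustrative value, not part of the original setup):

-- Lower the cap for log_messages; the trigger reads the new value on its next run
UPDATE public.table_size_limit
SET size_limit = 50000
WHERE table_name = 'log_messages';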

Creating the Trigger Function

The trigger function will be responsible for deleting excess records after an insert operation is completed on the ‘log_messages’ table.

-- The function that maintains our fixed-size table
CREATE OR REPLACE FUNCTION fn_trim_log_messages()
RETURNS TRIGGER AS $$
DECLARE
  v_limit INT;
  v_count INT;
BEGIN
  -- Look up the configured limit for this table
  SELECT size_limit INTO v_limit
  FROM public.table_size_limit
  WHERE table_name = 'log_messages';

  -- Count the existing records
  SELECT count(*) INTO v_count FROM log_messages;

  -- Remove the oldest entries if the limit is exceeded.
  -- Rows inserted in the same transaction share a log_time, so id is
  -- added as a tiebreaker to make the deletion order deterministic.
  IF v_count > v_limit THEN
    DELETE FROM log_messages
    WHERE id IN (
      SELECT id FROM log_messages
      ORDER BY log_time ASC, id ASC
      LIMIT v_count - v_limit
    );
  END IF;

  RETURN NEW;
END;
$$ LANGUAGE plpgsql;

-- Attach the trigger to our table
CREATE TRIGGER trg_trim_log_messages
AFTER INSERT ON log_messages
FOR EACH ROW EXECUTE FUNCTION fn_trim_log_messages();

Note that ‘fn_trim_log_messages’ is a PL/pgSQL function bound to the log_messages table through the trg_trim_log_messages trigger. The AFTER INSERT timing ensures the function runs only after each row has been inserted, so it never blocks the incoming data.
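One caveat: a FOR EACH ROW trigger re-counts the table once per inserted row, which is wasteful for bulk inserts. A possible alternative, sketched here under the same schema as above, is a statement-level trigger that trims once per INSERT statement:

-- Sketch: fire once per statement instead of once per row
CREATE OR REPLACE FUNCTION fn_trim_log_messages_stmt()
RETURNS TRIGGER AS $$
DECLARE
  v_limit INT;
BEGIN
  SELECT size_limit INTO v_limit
  FROM public.table_size_limit
  WHERE table_name = 'log_messages';

  -- Delete everything beyond the newest v_limit rows in one pass
  DELETE FROM log_messages
  WHERE id IN (
    SELECT id FROM log_messages
    ORDER BY log_time DESC, id DESC
    OFFSET v_limit
  );
  RETURN NULL;  -- the return value is ignored for statement-level AFTER triggers
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER trg_trim_log_messages_stmt
AFTER INSERT ON log_messages
FOR EACH STATEMENT EXECUTE FUNCTION fn_trim_log_messages_stmt();

If you try this variant, drop the row-level trigger first so the two do not both fire on every insert.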

Testing the Trigger

Let’s put our setup to the test by inserting a batch of records and checking whether the table stays within the predetermined limit of 100,000 records.

-- Let's generate some logs
INSERT INTO log_messages (log_text)
SELECT
  'Log Message ' || i
FROM generate_series(1, 100100) AS s(i);

-- Count the records to see if the trigger has worked
SELECT count(*) FROM log_messages;

-- The count should not exceed 100,000

For a table configured to hold at most 100,000 records, the count should return exactly that number, confirming that the trigger trimmed the excess.

Considerations and Performance

There are a few considerations when implementing fixed-size tables:

  • Performance: Deletion of records, if triggered often, can be expensive, especially on wide tables or under high write/read throughput; as written, the row-level trigger also counts the whole table once per inserted row.
  • Transactional Safety: Implementations like this must carefully handle transaction behavior to avoid unexpected loss of data or inconsistencies.
  • Archiving: In some cases, rather than deleting old records, it might be more beneficial to archive them. The implementation logic would slightly adjust to move data rather than discard it.
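For the archiving variant mentioned above, one sketch (assuming an archived_log_messages table with the same columns, which is not part of the original setup) replaces the plain DELETE with DELETE ... RETURNING so trimmed rows are moved rather than lost:

-- Sketch: move trimmed rows into an archive table instead of discarding them
CREATE TABLE IF NOT EXISTS archived_log_messages (LIKE log_messages INCLUDING ALL);

WITH trimmed AS (
  DELETE FROM log_messages
  WHERE id IN (
    SELECT id FROM log_messages
    ORDER BY log_time ASC, id ASC
    LIMIT 100  -- however many rows currently exceed the limit
  )
  RETURNING *
)
INSERT INTO archived_log_messages SELECT * FROM trimmed;

The CTE form keeps the move atomic: either a row is archived and deleted, or neither happens.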

Throughout this guide, we’ve gone through the process of implementing fixed-size tables by leveraging PostgreSQL triggers. This technique is a powerful method for managing table sizes and can be applied across various cases where sizing constraints are necessary for database performance and management.

Conclusion

Fixed-size tables are a necessity in specific PostgreSQL use cases. With triggers, you can keep a table at a predictable size without manual oversight. Customize the trigger to fit your use case, and manage your PostgreSQL databases more effectively.