PGD AutoPartition v5.6
PGD AutoPartition allows you to split tables into several partitions, letting tables grow easily to large sizes through automatic partitioning management. This capability uses features of PGD, such as low-conflict locking when creating and dropping partitions.
You can create new partitions regularly and then drop them when the data retention period expires.
You perform PGD management primarily by using functions that can be called by SQL.
All functions in PGD are exposed in the bdr schema. Unless you put it into your search_path, you need to schema-qualify the name of each function.
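For example, a schema-qualified call looks like this (a minimal sketch; the table name measurement is a placeholder):

```sql
-- Schema-qualified call; works regardless of search_path.
SELECT bdr.autopartition_enable('measurement');
```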
Auto creation of partitions
PGD AutoPartition uses the bdr.autopartition() function to create or alter the definition of automatic range partitioning for a table. If no definition exists, it's created. Otherwise, later executions will alter the definition.
In PGD 5.5 and later, AutoPartition leverages underlying Postgres features that allow a partition to be attached or detached/dropped without locking the rest of the table. Versions of PGD earlier than 5.5 don't support this feature and lock the tables.
An error is raised if the table isn't RANGE partitioned or a multi-column partition key is used.
By default, AutoPartition manages partitions globally. In other words, when a partition is created on one node, the same partition is created on all other nodes in the cluster. Using the default makes all partitions consistent and guaranteed to be available. For this capability, AutoPartition makes use of Raft.
You can change this behavior by setting managed_locally to true. In that case, all partitions are managed locally on each node. Managing partitions locally is useful when the partitioned table isn't a replicated table. In that case, you might not need or want to have all partitions on all nodes. For example, the built-in bdr.conflict_history table isn't a replicated table. It's managed by AutoPartition locally. Each node creates partitions for this table locally and drops them once they're old enough.
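A minimal sketch of requesting local management when defining autopartitioning, assuming a placeholder table local_log that's range partitioned on a timestamp column:

```sql
-- Partitions for this table are created and dropped independently
-- on each node rather than coordinated across the cluster.
SELECT bdr.autopartition('local_log', '1 day',
        managed_locally := true);
```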
Also consider:

- You can't later change tables marked as managed_locally to be managed globally and vice versa.
- Activities are performed only when the entry is marked enabled = on.
- We recommend that you don't manually create or drop partitions for tables managed by AutoPartition. Doing so can make the AutoPartition metadata inconsistent and might cause it to fail.
AutoPartition examples
Daily partitions, keep data for one month:
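A sketch of such a setup, assuming a placeholder table measurement that's range partitioned on a date column:

```sql
-- Placeholder table, range partitioned on logdate.
CREATE TABLE measurement (
    logdate    date not null,
    peaktemp   int,
    unitsales  int
) PARTITION BY RANGE (logdate);

-- One new partition per day; drop partitions once they're a month old.
SELECT bdr.autopartition('measurement', '1 day',
        data_retention_period := '1 month');
```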
Create five advance partitions when there are only two more partitions remaining. Each partition can hold 1 billion orders.
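A sketch, assuming a placeholder table orders that's range partitioned on an integer order ID:

```sql
-- Keep up to five partitions in advance; create more when only two
-- remain. Each partition spans 1 billion order IDs.
SELECT bdr.autopartition('orders', '1000000000',
        partition_initial_lowerbound := '0',
        minimum_advance_partitions := 2,
        maximum_advance_partitions := 5);
```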
RANGE-partitioned tables
A new partition is added for every partition_increment range of values. Lower and upper bound are partition_increment apart. For tables with a partition key of type timestamp or date, the partition_increment must be a valid constant of type interval. For example, specifying 1 Day causes a new partition to be added each day, with partition bounds that are one day apart.
If the partition column is connected to a snowflakeid, timeshard, or ksuuid sequence, you must specify the partition_increment as type interval. Otherwise, if the partition key is integer or numeric, then the partition_increment must be a valid constant of the same datatype. For example, specifying 1000000 causes new partitions to be added every 1 million values.
If the table has no existing partition, then the specified partition_initial_lowerbound is used as the lower bound for the first partition. If you don't specify partition_initial_lowerbound, then the system tries to derive its value from the partition column type and the specified partition_increment. For example, if partition_increment is specified as 1 Day, then partition_initial_lowerbound is set to CURRENT DATE. If partition_increment is specified as 1 Hour, then partition_initial_lowerbound is set to the current hour of the current date. The bounds for the subsequent partitions are set using the partition_increment value.
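For example, a sketch that pins the first partition's lower bound explicitly instead of letting the system derive it (the table name and date are placeholders):

```sql
-- The first weekly partition starts at an explicit date rather than
-- one derived from the current date.
SELECT bdr.autopartition('measurement', '1 week',
        partition_initial_lowerbound := '2024-01-01');
```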
The system always tries to have a certain minimum number of advance partitions. To decide whether to create new partitions, it uses the specified partition_autocreate_expression. This can be any expression that can be evaluated by SQL, and it's evaluated every time a check is performed. For example, for a partitioned table on a column of type date, suppose partition_autocreate_expression is specified as DATE_TRUNC('day', CURRENT_DATE), partition_increment is specified as 1 Day, and minimum_advance_partitions is specified as 2. New partitions are then created until the upper bound of the last partition is less than DATE_TRUNC('day', CURRENT_DATE) + '2 Days'::interval. The expression is evaluated each time the system checks for new partitions.
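Putting those settings together, a sketch of the configuration that example describes (measurement is a placeholder table; the expression is dollar-quoted to avoid escaping its inner quotes):

```sql
-- Daily partitions are created until the last partition's upper bound
-- reaches DATE_TRUNC('day', CURRENT_DATE) + '2 Days'::interval.
SELECT bdr.autopartition('measurement', '1 day',
        partition_autocreate_expression := $$DATE_TRUNC('day', CURRENT_DATE)$$,
        minimum_advance_partitions := 2);
```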
For a partitioned table on a column of type integer, you can specify the partition_autocreate_expression as SELECT max(partcol) FROM schema.partitioned_table. The system then regularly checks whether the maximum value of the partitioned column is within the distance of minimum_advance_partitions * partition_increment of the last partition's upper bound. Create an index on the partcol column so that the query runs efficiently. If you don't specify the partition_autocreate_expression for a partitioned table on a column of type integer, smallint, or bigint, then the system sets it to max(partcol).
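For instance, a sketch for an integer partition key (the orders table and its order_id column are placeholders):

```sql
-- Advance partitions track the maximum order_id actually present.
SELECT bdr.autopartition('orders', '1000000',
        partition_autocreate_expression :=
            'SELECT max(order_id) FROM public.orders');
```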
If the data_retention_period is set, partitions are dropped after this period. To minimize locking, partitions are dropped at the same time as new partitions are added. If you don't set this value, you must drop the partitions manually.

The data_retention_period parameter is supported only for timestamp-based (and related) partitions. The period is calculated by considering the upper bound of the partition. The partition is dropped if the given period expires, relative to the upper bound.
Stopping automatic creation of partitions
Use bdr.drop_autopartition() to drop the autopartitioning rule for the given relation. All pending work items for the relation are deleted, and no new work items are created.
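For example, using the placeholder table from the earlier sketches:

```sql
-- Remove the autopartition rule and any pending work items.
SELECT bdr.drop_autopartition('measurement');
```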
Waiting for partition creation
Partition creation is an asynchronous process. AutoPartition provides a set of functions to wait for the partition to be created, locally or on all nodes.
Use bdr.autopartition_wait_for_partitions() to wait for the creation of partitions on the local node. The function takes the partitioned table name and a partition key column value and waits until the partition that holds that value is created.
The function waits only for the partitions to be created locally. It doesn't guarantee that the partitions also exist on the remote nodes.
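For example, a sketch assuming the function takes the relation and a partition key value passed as text:

```sql
-- Block until the local partition covering this date exists.
SELECT bdr.autopartition_wait_for_partitions('measurement', '2024-06-01');
```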
To wait for the partition to be created on all PGD nodes, use the bdr.autopartition_wait_for_partitions_on_all_nodes() function. This function internally checks local as well as all remote nodes and waits until the partition is created everywhere.
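The all-nodes variant takes the same arguments, for example:

```sql
-- Block until the partition covering this date exists on every node.
SELECT bdr.autopartition_wait_for_partitions_on_all_nodes('measurement', '2024-06-01');
```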
Finding a partition
Use the bdr.autopartition_find_partition() function to find the partition for the given partition key value. If a partition to hold that value doesn't exist, then the function returns NULL. Otherwise, it returns the Oid of the partition.
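For example, with the placeholder measurement table:

```sql
-- Returns the partition's Oid, or NULL if no partition holds the value.
SELECT bdr.autopartition_find_partition('measurement', '2024-06-01');
```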
Enabling or disabling autopartitioning
Use bdr.autopartition_enable() to enable autopartitioning on the given table. If autopartitioning is already enabled, then no action occurs. Similarly, use bdr.autopartition_disable() to disable autopartitioning on the given table.
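For example:

```sql
-- Pause automatic partition management, then resume it.
SELECT bdr.autopartition_disable('measurement');
SELECT bdr.autopartition_enable('measurement');
```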
Restrictions on EPAS-native automatic partitioning
EPAS-native automatic partitioning is not supported on PGD.
If the PGD extension is active on an EPAS database, DDL commands to configure EPAS automatic partitioning (ALTER TABLE ... SET AUTOMATIC and ALTER TABLE ... SET INTERVAL) will be rejected.
Note that while it is possible to enable the PGD extension on an EPAS database containing tables configured to use EPAS-native automatic partitioning, it will not be possible to join further nodes using this node as a source node.
EPAS-native automatic partitioning can be disabled with one of the following commands:
- ALTER TABLE ... SET MANUAL (for list-partitioned tables)
- ALTER TABLE ... SET INTERVAL () (for interval-partitioned tables)