# Fast GiST index build GSoC 2011

## Idea

Currently the GiST index has no bulk-load functionality: a new index is built by inserting entries one at a time. This makes new index creation relatively slow in comparison with btree and even GIN. There is a body of work in computer science on bulk operations for R-trees, and since GiST is in essence a generalization of the R-tree, some of it appears applicable to GiST. In [1], R-tree bulk operation techniques are presented. Despite some issues arising from the differences between GiST and the R-tree, the techniques of that paper seem applicable to GiST.

## Details

#### Brief description of the algorithm

Paper [1] presents R-tree bulk operation techniques which are, in general, an application of the buffer tree [2] to the R-tree.

The technique of paper [1] can be briefly described by the following rules.

M = number of index keys fitting in RAM;

B = number of index keys in one page;

An additional buffer of M/(2*B) pages is attached to each node on certain levels. Levels are selected with step floor(log_B(M/4B)); leaf nodes do not have buffers. That is, nodes on levels i*floor(log_B(M/4B)), i = 1, 2, 3, ... have buffers (levels are numbered from the bottom up, with level 0 containing the leaf nodes). When an entry reaches a node with a buffer, it is placed into that buffer. When a buffer overflows, its contents are pushed down into lower buffers or leaf pages. When a split occurs in a node with a buffer, the buffer is split into two buffers using the penalty function.
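To make the placement rules above concrete, the level step and per-node buffer size can be computed directly from M and B. This is only a sketch of the arithmetic with invented sample numbers, not the actual PostgreSQL implementation:

```python
import math

def buffer_layout(M, B):
    """Sketch of the buffer-placement arithmetic described above.

    M -- number of index keys fitting in RAM
    B -- number of index keys in one page
    Returns (level step, buffer size in pages).
    """
    # Buffers sit on levels i * step, i = 1, 2, ...; level 0 holds leaves.
    step = max(1, math.floor(math.log(M / (4 * B), B)))
    # Each buffered node gets a buffer of M/(2*B) pages.
    buffer_pages = M // (2 * B)
    return step, buffer_pages

# e.g. 1,000,000 keys fitting in RAM, 100 keys per page:
step, pages = buffer_layout(1_000_000, 100)
# step == 1 (every level is buffered), pages == 5000
```

With a larger fan-out B the logarithm shrinks, so buffered levels become sparser while each buffer stays at M/(2*B) pages.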

#### Varlena keys

Since the R-tree was considered in paper [1], the key length there is fixed. In GiST we may deal with varlena keys, which leads to complications. For example, a page split can occur while placing an index entry into the appropriate buffer. Another difficulty with varlena keys is that the buffer size and the level step are calculated from the number of entries per page; with varlena keys this number is not constant.

The first implementation will take a very simple approach to varlena keys. The largest buffer size and the smallest level step occur when the key size is minimal, so the minimal key size is the worst case. Since the minimal varlena size is 4 bytes, we can use it in the initial calculations.
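As an illustration of this worst-case reasoning, here is a sketch of the calculation assuming the minimal 4-byte varlena datum. The page size matches PostgreSQL's default block size, but the per-entry overhead constant is an illustrative assumption, not the exact value PostgreSQL uses:

```python
import math

PAGE_SIZE = 8192     # PostgreSQL default block size
ENTRY_OVERHEAD = 12  # assumed per-entry overhead on the page (illustrative)
MIN_VARLENA = 4      # minimal varlena datum: 4-byte header, empty payload

# Worst case: the smallest possible key maximizes keys per page (B),
# which maximizes the buffer size and minimizes the level step.
B_worst = PAGE_SIZE // (ENTRY_OVERHEAD + MIN_VARLENA)  # 512 entries per page
M = 1_000_000                                          # keys fitting in RAM
step_worst = max(1, math.floor(math.log(M / (4 * B_worst), B_worst)))
buffer_pages_worst = M // (2 * B_worst)
```

Using this pessimistic B for the initial calculations keeps the buffers and level step safe for any actual mix of key sizes, at the cost of buffering more levels than strictly necessary when keys are large.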

#### Other possible applications

• Bulk insert. Since the access method interface in PostgreSQL doesn't contain a bulk insert function, bulk inserts are performed by inserting entries one at a time, and the access method doesn't know how many entries to expect. This makes applying a bulk insert technique to PostgreSQL GiST difficult. With the current access method interface, bulk insert could be implemented using a technique like GIN's fastupdate. Note, however, that such a technique forces concurrent index searches to scan the buffers. Since the buffer size is significant (the size of the lowest buffer is comparable to the size of the whole subtree beneath it), this can lead to a significant slowdown. Finding a compromise between index build acceleration and concurrent search slowdown is a separate research question, and it doesn't seem feasible to fit it into the GSoC.
• Bulk delete. The current bulk delete operation in PostgreSQL GiST is fast, but it doesn't restructure the index, which can degrade the index during bulk deletes. The bulk delete technique in paper [1] focuses on accelerating a bulk delete procedure that does restructure the index. The problem with applying this approach to GiST is that the GiST implementation doesn't provide any explicit guarantees about storage utilization; if such guarantees exist, they are hidden in the picksplit method. Applying the bulk delete technique would require the GiST access method to know the storage utilization guarantees of the operator class implementation. In turn, that would require refactoring GiST operator classes, and it doesn't seem feasible to fit it into the GSoC.

## Schedule

Until June 3

First approximate implementation.

June 4 - June 6

First benchmarking.

June 7 - July 14

Work on the index quality problem. Tests on various GiST applications and data distributions.

July 15 - July 31

Detailed benchmarks on the widest possible range of datasets.

August 1 - August 16

Final refactoring, testing, and commit.

## TODO

• Solve the index quality problem
• Test with level step > 1
• Test with varlena keys
• Revise data structures; remove unnecessary fields.