---
task_categories:
- text-retrieval
- sentence-similarity
language:
- en
tags:
- embeddings
- vector-database
- benchmark
---
# GAS Indexing Artifacts
## Dataset Description
This dataset contains pre-computed deterministic centroids and associated geometric metadata generated using our GAS (Geometry-Aware Selection) algorithm.
These artifacts are designed to benchmark Approximate Nearest Neighbor (ANN) search performance in privacy-preserving or dynamic vector database environments.
### Purpose
To serve as a standardized benchmark resource for evaluating the efficiency and recall of vector databases implementing the GAS architecture.
It is specifically designed for integration with VectorDBBench.
### Dataset Summary
- **Source Data**: Wikipedia (Public Dataset)
- **Embedding Model**: [google/embeddinggemma-300m](https://huggingface.co/google/embeddinggemma-300m)
## Dataset Structure
For each embedding model, the directory contains two key files:
| File | Description |
|-------|-------------|
| `centroids.npy` | IVF cluster centroids, one row per centroid |
| `tree_info.pkl` | Hierarchical tree metadata (parent pointers and leaf information) |
## Data Fields
### Centroids: `centroids.npy`
- **Purpose**: Finding the nearest clusters in an IVF (Inverted File) index (see the search sketch after this list)
- **Type**: NumPy array (`np.ndarray`)
- **Shape**: `[32768, 768]`
- **Description**: 768-dimensional vectors representing 32,768 cluster centroids
- **Normalization**: L2-normalized (unit norm)
- **Format**: float32
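Because the centroids are L2-normalized, the nearest clusters for a (normalized) query reduce to a single inner product. A minimal sketch; the function name `nearest_centroids` and the `n_probe` parameter are illustrative, not part of the dataset:

```python
import numpy as np

centroids = np.load("centroids.npy")  # (32768, 768), L2-normalized float32

def nearest_centroids(query: np.ndarray, n_probe: int = 8) -> np.ndarray:
    """Return indices of the n_probe centroids closest to a query vector.

    With unit-norm vectors, maximum inner product equals maximum cosine
    similarity, so one matrix-vector product scores all 32,768 clusters.
    """
    query = query / np.linalg.norm(query)  # normalize defensively
    scores = centroids @ query             # (32768,) cosine similarities
    return np.argsort(-scores)[:n_probe]   # indices of the top clusters
```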
### Tree Metadata: `tree_info.pkl`
- **Purpose**: Locating virtual clusters by following the hierarchical tree structure during GAS search (see the loading sketch below)
- **Type**: Python dictionary (pickle)
- **Keys**:
- `node_parents`: Dictionary mapping each node ID to its parent node ID
- Format: `{node_id: parent_node_id, ...}`
- Contains parent-child relationships for all nodes in the tree
- `leaf_ids`: List of leaf node IDs
- Format: `[leaf_id_1, leaf_id_2, ..., leaf_id_32768]`
- Total 32,768 leaf nodes (corresponding to 32,768 centroids)
- `leaf_to_centroid_idx`: Mapping from leaf node IDs to centroid indices in `centroids.npy`
- Format: `{leaf_node_id: centroid_index, ...}`
- Maps each leaf node to its corresponding row index in `centroids.npy`
- Important: Leaf IDs in `leaf_ids` are ordered sequentially, so the i-th leaf corresponds to the i-th centroid
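A minimal loading sketch, assuming both files sit in the working directory. The root-detection guard assumes the root either has no entry in `node_parents` or points to itself; the format description above does not state this explicitly:

```python
import pickle
import numpy as np

centroids = np.load("centroids.npy")             # (32768, 768), float32
with open("tree_info.pkl", "rb") as f:
    tree_info = pickle.load(f)

node_parents = tree_info["node_parents"]         # {node_id: parent_node_id}
leaf_ids = tree_info["leaf_ids"]                 # 32,768 leaf node IDs
leaf_to_centroid = tree_info["leaf_to_centroid_idx"]

# Look up the centroid vector belonging to a leaf node.
leaf = leaf_ids[0]
vec = centroids[leaf_to_centroid[leaf]]

# Walk from the leaf to the root via parent pointers.
path = [leaf]
while path[-1] in node_parents and node_parents[path[-1]] != path[-1]:
    path.append(node_parents[path[-1]])
print(f"leaf {leaf}: depth {len(path) - 1}, centroid norm {np.linalg.norm(vec):.4f}")
```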
## Dataset Creation
### Source Data
The source data is the public English Wikipedia dataset [mixedbread-ai/wikipedia-data-en-2023-11](https://huggingface.co/datasets/mixedbread-ai/wikipedia-data-en-2023-11).
### Preprocessing
1. Centroid creation: centroids are built with the GAS approach (detailed description TBD).
2. Chunking: for texts exceeding 2048 tokens (see the sketch after this list):
- Split the text into chunks with roughly 100 tokens of overlap
- Embed each chunk separately
- Average the chunk embeddings into the final representation
3. Normalization: All embeddings are L2-normalized
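A sketch of the chunk-and-average step. `embed_fn` is a hypothetical stand-in for the embeddinggemma-300m encoder, and the exact chunk boundaries used to build this dataset are not published:

```python
import numpy as np

def embed_long_text(tokens, embed_fn, max_len=2048, overlap=100):
    """Embed a token sequence longer than max_len via overlapping chunks.

    embed_fn is a hypothetical callable mapping a token chunk to a
    768-dimensional vector; chunk embeddings are mean-pooled and the
    result is L2-normalized, mirroring the preprocessing steps above.
    """
    step = max_len - overlap
    chunks = [tokens[i:i + max_len] for i in range(0, len(tokens), step)]
    chunk_embs = np.stack([embed_fn(c) for c in chunks])  # (n_chunks, 768)
    mean_emb = chunk_embs.mean(axis=0)                    # average chunks
    return mean_emb / np.linalg.norm(mean_emb)            # L2-normalize
```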
### Embedding Generation
- Model: google/embeddinggemma-300m
- Dimension: 768
- Max Token Length: 2048
- Normalization: L2-normalized
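Query-side embeddings can be reproduced with `sentence-transformers`; this is a sketch that assumes the library's standard `SentenceTransformer` API supports the model and that you have an authenticated Hugging Face session if the model is gated:

```python
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("google/embeddinggemma-300m")

# normalize_embeddings=True matches the L2 normalization used above.
emb = model.encode(["An example Wikipedia passage."], normalize_embeddings=True)
print(emb.shape)  # (1, 768)
```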
## Usage
```python
import os

import wget

def download_centroids(embedding_model: str, dataset_dir: str) -> None:
    """Download pre-computed centroids and tree info for GAS."""
    base_url = f"https://huggingface.co/datasets/cryptolab-playground/gas-centroids/resolve/main/{embedding_model}"
    os.makedirs(dataset_dir, exist_ok=True)
    for name in ("centroids.npy", "tree_info.pkl"):
        wget.download(f"{base_url}/{name}", out=os.path.join(dataset_dir, name))
```
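For example, fetching the artifacts for the published model (the target directory name is arbitrary):

```python
download_centroids("embeddinggemma-300m", "gas_artifacts")
```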
## License
Apache 2.0
## Citation
If you use this dataset, please cite:
```bibtex
@dataset{gas-centroids,
  author    = {{CryptoLab, Inc.}},
  title     = {GAS Centroids},
  year      = {2025},
  publisher = {Hugging Face},
  url       = {https://huggingface.co/datasets/cryptolab-playground/gas-centroids}
}
```
### Source Dataset Citation
```bibtex
@dataset{wikipedia_data_en_2023_11,
  author    = {mixedbread-ai},
  title     = {Wikipedia Data EN 2023 11},
  year      = {2023},
  publisher = {Hugging Face},
  url       = {https://huggingface.co/datasets/mixedbread-ai/wikipedia-data-en-2023-11}
}
```
### Embedding Model Citation
```bibtex
@misc{embeddinggemma,
  title  = {EmbeddingGemma},
  author = {Google},
  year   = {2025},
  url    = {https://huggingface.co/google/embeddinggemma-300m}
}
```
### Acknowledgments
- Original dataset: mixedbread-ai/wikipedia-data-en-2023-11
- Embedding model: google/embeddinggemma-300m
- Benchmark framework: VectorDBBench