Update README.md
README.md CHANGED
@@ -13,12 +13,12 @@ size_categories:
 ---
 
 <h1 align="center">
-MINT-1T:<br>Scaling Open-Source Multimodal Data by 10x:<br> A Multimodal Dataset with One Trillion Tokens
+🍃 MINT-1T:<br>Scaling Open-Source Multimodal Data by 10x:<br> A Multimodal Dataset with One Trillion Tokens
 </h1>
 
-MINT-1T is an open-source **M**ultimodal **INT**erleaved dataset with 1 trillion text tokens and 3.4 billion images, a 10x scale-up from existing open-source datasets. Additionally, we include previously untapped sources such as PDFs and ArXiv papers. MINT-1T is designed to facilitate research in multimodal pretraining. MINT-1T was created by a team from the University of Washington in collaboration with Salesforce Research and other academic institutions, including Stanford University, the University of Texas at Austin, and the University of California, Berkeley.
+🍃 MINT-1T is an open-source **M**ultimodal **INT**erleaved dataset with 1 trillion text tokens and 3.4 billion images, a 10x scale-up from existing open-source datasets. Additionally, we include previously untapped sources such as PDFs and ArXiv papers. MINT-1T is designed to facilitate research in multimodal pretraining. MINT-1T was created by a team from the University of Washington in collaboration with Salesforce Research and other academic institutions, including Stanford University, the University of Texas at Austin, and the University of California, Berkeley.
 
-You are currently viewing the HTML subset of MINT-1T. For PDF and ArXiv subsets, please refer to the [MINT-1T collection](https://huggingface.co/collections/mlfoundations/mint-1t-6690216ca4d0df7e518dde1c).
+You are currently viewing the HTML subset of 🍃 MINT-1T. For PDF and ArXiv subsets, please refer to the [🍃 MINT-1T collection](https://huggingface.co/collections/mlfoundations/mint-1t-6690216ca4d0df7e518dde1c).
 
 ## Dataset Details
 
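For quick experimentation, the HTML subset can be streamed with the 🤗 `datasets` library. The sketch below is illustrative only: the repo id `mlfoundations/MINT-1T-HTML` and the presence of a `train` split are assumptions based on this card, so inspect the first record before relying on any field names.

```python
# Minimal sketch: stream the HTML subset without downloading it in full.
# Assumes the repo id "mlfoundations/MINT-1T-HTML" and a "train" split.
from datasets import load_dataset

ds = load_dataset("mlfoundations/MINT-1T-HTML", split="train", streaming=True)

for doc in ds.take(1):
    print(doc.keys())  # inspect the schema before relying on field names
```

Streaming avoids materializing the full multi-terabyte subset on disk; drop `streaming=True` only if you intend to download everything.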
@@ -34,21 +34,21 @@ You are currently viewing the HTML subset of MINT-1T. For PDF and ArXiv subsets,
 
 <!-- This section describes suitable use cases for the dataset. -->
 
-MINT-1T is designed to facilitate research in multimodal pretraining. The dataset can be used to train multimodal models that reason about interleaved sequences of text and images, such as [Idefics2](https://huggingface.co/HuggingFaceM4/idefics2-8b), [XGen-MM](https://huggingface.co/Salesforce/xgen-mm-phi3-mini-instruct-r-v1), and [Chameleon](https://huggingface.co/facebook/chameleon-30b).
+🍃 MINT-1T is designed to facilitate research in multimodal pretraining. The dataset can be used to train multimodal models that reason about interleaved sequences of text and images, such as [Idefics2](https://huggingface.co/HuggingFaceM4/idefics2-8b), [XGen-MM](https://huggingface.co/Salesforce/xgen-mm-phi3-mini-instruct-r-v1), and [Chameleon](https://huggingface.co/facebook/chameleon-30b).
 
 ### Out-of-Scope Use
 
 <!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->
 
-MINT-1T was built to make research into large multimodal models more accessible. Using
+🍃 MINT-1T was built to make research into large multimodal models more accessible. Using
 the dataset to train models that ingest or generate personally identifying information (such
-as images of people's faces and other sensitive content), as well as using it for military applications, is an inappropriate use of MINT-1T.
+as images of people's faces and other sensitive content), as well as using it for military applications, is an inappropriate use of 🍃 MINT-1T.
 
 ## Dataset Creation
 
 ### Curation Rationale
 
-MINT-1T was created to address a significant gap in the open-source domain by providing a large-scale multimodal interleaved dataset for pre-training large multimodal models. This dataset aims to be a valuable resource for the research community, facilitating open science in multimodal pretraining.
+🍃 MINT-1T was created to address a significant gap in the open-source domain by providing a large-scale multimodal interleaved dataset for pre-training large multimodal models. This dataset aims to be a valuable resource for the research community, facilitating open science in multimodal pretraining.
 
 ### Source Data
 
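"Interleaved" here means each document preserves the original reading order of text blocks and images from the source page. The snippet below is a hypothetical illustration of that structure (the field names `texts` and `images` are assumed for illustration, not the verified schema); it shows the kind of aligned sequence an interleaved pretraining pipeline would consume.

```python
# Hypothetical interleaved document: two parallel lists where exactly one of
# texts[i] / images[i] is set at each position, preserving reading order.
doc = {
    "texts": ["Opening paragraph of the page.", None, "Text that follows the figure."],
    "images": [None, "https://example.com/figure1.jpg", None],
}

# Zipping the lists recovers the original text/image reading order.
sequence = [t if t is not None else img for t, img in zip(doc["texts"], doc["images"])]
print(sequence)
```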
@@ -58,7 +58,7 @@ The dataset is a comprehensive collection of multimodal documents from various s
 - PDF documents: Extracted from CommonCrawl WAT dumps covering 2023 to 2024
 - ArXiv documents: A subset of papers from the ArXiv repository
 
-In total, MINT-1T contains 1056.8 million documents, broken down as follows:
+In total, 🍃 MINT-1T contains 1056.8 million documents, broken down as follows:
 - 1029.4 million HTML documents
 - 26.8 million PDF documents
 - 0.6 million ArXiv documents
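As a quick sanity check, the per-source counts above sum exactly to the stated total:

```python
# Verify the per-source document counts (in millions) against the total.
counts = {"html": 1029.4, "pdf": 26.8, "arxiv": 0.6}
total = sum(counts.values())
assert abs(total - 1056.8) < 1e-6
print(f"{total:.1f} million documents")  # -> 1056.8 million documents
```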
@@ -136,7 +136,7 @@ Given these considerations, the following recommendations are provided:
 4. Bias Awareness: Researchers and developers should be cognizant of potential biases in the dataset and consider their impact on model training and outputs.
 
 ## License
-We release MINT-1T under a CC-BY-4.0 license, designating it primarily as a research artifact. While the dataset is freely available, users are responsible for ensuring its legal use in commercial settings. Users must independently verify compliance with applicable laws before employing MINT-1T for commercial purposes.
+We release 🍃 MINT-1T under a CC-BY-4.0 license, designating it primarily as a research artifact. While the dataset is freely available, users are responsible for ensuring its legal use in commercial settings. Users must independently verify compliance with applicable laws before employing MINT-1T for commercial purposes.
 
 ## Citation
 