| html_url | title | comments | body | comment_length | text | embeddings |
|---|---|---|---|---|---|---|
https://github.com/huggingface/datasets/issues/878 | Loading Data From S3 Path in Sagemaker | @lhoestq yes you are correct.
I am not allowed to save the "raw text" locally - The "raw text" must be saved only on S3.
I am allowed to save the output of any model locally.
It doesn't matter how I do it (boto3/pandas/pyarrow), it is forbidden | In SageMaker I'm trying to load the dataset from an S3 path as follows
`train_path = 's3://xxxxxxxxxx/xxxxxxxxxx/train.csv'
valid_path = 's3://xxxxxxxxxx/xxxxxxxxxx/validation.csv'
test_path = 's3://xxxxxxxxxx/xxxxxxxxxx/test.csv'
data_files = {}
data_files["train"] = train_path
data_files... | 47 | Loading Data From S3 Path in Sagemaker
In SageMaker I'm trying to load the dataset from an S3 path as follows
`train_path = 's3://xxxxxxxxxx/xxxxxxxxxx/train.csv'
valid_path = 's3://xxxxxxxxxx/xxxxxxxxxx/validation.csv'
test_path = 's3://xxxxxxxxxx/xxxxxxxxxx/test.csv'
data_files = {}
data_fi... | [
-0.2508120239,
-0.2407747656,
-0.0531519279,
0.503803134,
0.242677629,
0.0379794091,
0.4234101176,
0.2302230299,
0.0534565076,
0.012271638,
-0.0213181376,
0.3855957985,
-0.2364186496,
0.4087213278,
-0.073391445,
0.1519470215,
0.0036895594,
0.1174419373,
0.2415752858,
0.16631174... |
https://github.com/huggingface/datasets/issues/878 | Loading Data From S3 Path in Sagemaker | @dorlavie are you using sagemaker for training too? Then you could use S3 URI, for example `s3://my-bucket/my-training-data` and pass it within the `.fit()` function when you start the sagemaker training job. Sagemaker would then download the data from s3 into the training runtime and you could load it from disk
**s... | In SageMaker I'm trying to load the dataset from an S3 path as follows
`train_path = 's3://xxxxxxxxxx/xxxxxxxxxx/train.csv'
valid_path = 's3://xxxxxxxxxx/xxxxxxxxxx/validation.csv'
test_path = 's3://xxxxxxxxxx/xxxxxxxxxx/test.csv'
data_files = {}
data_files["train"] = train_path
data_files... | 127 | Loading Data From S3 Path in Sagemaker
In SageMaker I'm trying to load the dataset from an S3 path as follows
`train_path = 's3://xxxxxxxxxx/xxxxxxxxxx/train.csv'
valid_path = 's3://xxxxxxxxxx/xxxxxxxxxx/validation.csv'
test_path = 's3://xxxxxxxxxx/xxxxxxxxxx/test.csv'
data_files = {}
data_fi... | [
-0.2508120239,
-0.2407747656,
-0.0531519279,
0.503803134,
0.242677629,
0.0379794091,
0.4234101176,
0.2302230299,
0.0534565076,
0.012271638,
-0.0213181376,
0.3855957985,
-0.2364186496,
0.4087213278,
-0.073391445,
0.1519470215,
0.0036895594,
0.1174419373,
0.2415752858,
0.16631174... |
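The workaround discussed in the comments above can be sketched as follows. Bucket and key names are placeholders, and the `localize` helper is hypothetical: at the time of this issue, `datasets.load_dataset` could not read `s3://` URIs directly, so the CSVs are first copied to local disk (e.g. with a boto3-based downloader) and `data_files` then points at the local copies.

```python
# Sketch of the workaround: copy the CSVs from S3 to local disk first,
# then point load_dataset's data_files at the local copies.
# All bucket/key names below are placeholders.
import os

def build_data_files(train_path, valid_path, test_path):
    """Map split names to file paths, in the shape load_dataset expects."""
    return {"train": train_path, "validation": valid_path, "test": test_path}

def localize(data_files, download=None, dest_dir="."):
    """Copy each s3:// path to dest_dir via `download(s3_uri, local_path)`
    (e.g. a boto3 download_file wrapper) and return a data_files dict
    pointing at the local copies."""
    local = {}
    for split, uri in data_files.items():
        target = os.path.join(dest_dir, uri.rsplit("/", 1)[-1])
        if download is not None:
            download(uri, target)  # hypothetical downloader callable
        local[split] = target
    return local

if __name__ == "__main__":
    files = build_data_files(
        "s3://my-bucket/data/train.csv",       # placeholder URIs
        "s3://my-bucket/data/validation.csv",
        "s3://my-bucket/data/test.csv",
    )
    print(localize(files))  # pass a real downloader to actually fetch
```

Later releases of `datasets` integrate with `fsspec`/`s3fs`, which may remove the need for manual copies entirely.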
https://github.com/huggingface/datasets/issues/877 | DataLoader(datasets) become more and more slowly within iterations | Hi ! Thanks for reporting.
Do you have the same slowdown when you iterate through the raw dataset object as well ? (no dataloader)
It would be nice to know whether it comes from the dataloader or not | Hello, when I loop over my dataloader, the loading speed becomes slower and slower!
```
dataset = load_from_disk(dataset_path) # around 21,000,000 lines
lineloader = tqdm(DataLoader(dataset, batch_size=1))
for idx, line in enumerate(lineloader):
# do some thing for each line
```
In the beginning, th...
Hello, when I loop over my dataloader, the loading speed becomes slower and slower!
```
dataset = load_from_disk(dataset_path) # around 21,000,000 lines
lineloader = tqdm(DataLoader(dataset, batch_size=1))
for idx, line in enumerate(linel... | [
-0.3241669238,
-0.0680931062,
-0.1207574233,
0.2375428975,
-0.0479867347,
-0.0551255904,
0.4634111524,
-0.0104471454,
0.2775006294,
0.1358922571,
-0.0574945994,
0.4406973422,
-0.0379496254,
-0.0338125266,
-0.2684330642,
0.0123531884,
-0.0058119255,
0.1865818948,
-0.492304951,
-... |
https://github.com/huggingface/datasets/issues/877 | DataLoader(datasets) become more and more slowly within iterations | > Hi ! Thanks for reporting.
> Do you have the same slowdown when you iterate through the raw dataset object as well ? (no dataloader)
> It would be nice to know whether it comes from the dataloader or not
I did not iterate over the raw dataset; maybe I will test later. Now I iterate over all files directly from `open(file)... | Hello, when I loop over my dataloader, the loading speed becomes slower and slower!
```
dataset = load_from_disk(dataset_path) # around 21,000,000 lines
lineloader = tqdm(DataLoader(dataset, batch_size=1))
for idx, line in enumerate(lineloader):
# do some thing for each line
```
In the begining, th... | 64 | DataLoader(datasets) become more and more slowly within iterations
Hello, when I loop over my dataloader, the loading speed becomes slower and slower!
```
dataset = load_from_disk(dataset_path) # around 21,000,000 lines
lineloader = tqdm(DataLoader(dataset, batch_size=1))
for idx, line in enumerate(linel... | [
-0.316660136,
-0.1029364243,
-0.0979053453,
0.2440701127,
-0.0499429889,
-0.0601619482,
0.4438635409,
0.0588656813,
0.2712094188,
0.1299745142,
-0.1063352376,
0.4185296297,
-0.0331582054,
-0.0275142062,
-0.2983724177,
-0.0049311807,
0.0067098089,
0.1882977784,
-0.4768674076,
-0... |
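To act on the suggestion above (check whether the slowdown comes from the `DataLoader` or from the dataset itself), one can time fixed-size chunks of iterations over each and look for a growing trend. This plain-Python sketch works on any iterable; the names are illustrative.

```python
import time

def chunk_times(iterable, chunk_size=1000, max_chunks=None):
    """Time consecutive chunks of iterations.

    Returns a list of per-chunk durations in seconds; a steadily
    increasing trend indicates a per-iteration slowdown."""
    times, start, count, chunks = [], time.perf_counter(), 0, 0
    for _ in iterable:
        count += 1
        if count == chunk_size:
            now = time.perf_counter()
            times.append(now - start)
            start, count = now, 0
            chunks += 1
            if max_chunks is not None and chunks >= max_chunks:
                break
    return times
```

Comparing `chunk_times(DataLoader(dataset, batch_size=1))` against `chunk_times(dataset)` would show which of the two degrades over time.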
https://github.com/huggingface/datasets/issues/876 | imdb dataset cannot be loaded | It looks like there was an issue while building the imdb dataset.
Could you provide more information about your OS and the version of python and `datasets` ?
Also could you try again with
```python
dataset = datasets.load_dataset("imdb", split="train", download_mode="force_redownload")
```
to make sure it's no... | Hi
I am trying to load the imdb train dataset
`dataset = datasets.load_dataset("imdb", split="train")`
getting the following errors, thanks for your help
```
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/idiap/user/rkarimi/libs/anaconda3/envs/internship/lib/python3.7/... | 51 | imdb dataset cannot be loaded
Hi
I am trying to load the imdb train dataset
`dataset = datasets.load_dataset("imdb", split="train")`
getting the following errors, thanks for your help
```
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/idiap/user/rkarimi/libs/anacond... | [
-0.5215907693,
-0.0397975221,
-0.1209278107,
0.413828671,
0.3087592125,
0.3442182839,
0.4162841737,
0.4116173089,
0.1893083006,
-0.1259102225,
-0.2459376007,
-0.0363802686,
-0.2387998998,
0.1189185232,
-0.1006124169,
-0.193389222,
-0.109389469,
-0.0083493162,
0.0082315775,
0.02... |
https://github.com/huggingface/datasets/issues/876 | imdb dataset cannot be loaded | Hi ! I just tried in 1.8.0 and it worked fine. Can you try again ? Maybe the dataset host had some issues that are fixed now | Hi
I am trying to load the imdb train dataset
`dataset = datasets.load_dataset("imdb", split="train")`
getting the following errors, thanks for your help
```
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/idiap/user/rkarimi/libs/anaconda3/envs/internship/lib/python3.7/... | 27 | imdb dataset cannot be loaded
Hi
I am trying to load the imdb train dataset
`dataset = datasets.load_dataset("imdb", split="train")`
getting the following errors, thanks for your help
```
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/idiap/user/rkarimi/libs/anacond... | [
-0.5215907693,
-0.0397975221,
-0.1209278107,
0.413828671,
0.3087592125,
0.3442182839,
0.4162841737,
0.4116173089,
0.1893083006,
-0.1259102225,
-0.2459376007,
-0.0363802686,
-0.2387998998,
0.1189185232,
-0.1006124169,
-0.193389222,
-0.109389469,
-0.0083493162,
0.0082315775,
0.02... |
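The `force_redownload` suggestion above can be wrapped in a small fallback helper. It is written against a generic `loader` callable so it can be sketched and tested without network access; with the real library you would pass `datasets.load_dataset`, whose `download_mode="force_redownload"` option discards a possibly corrupted cache.

```python
def load_with_redownload_fallback(loader, *args, **kwargs):
    """Try a normal load first; on failure retry once with
    download_mode="force_redownload" to discard a bad cached copy.

    `loader` would typically be datasets.load_dataset."""
    try:
        return loader(*args, **kwargs)
    except Exception:
        kwargs["download_mode"] = "force_redownload"
        return loader(*args, **kwargs)
```

Usage would look like `load_with_redownload_fallback(datasets.load_dataset, "imdb", split="train")`.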
https://github.com/huggingface/datasets/issues/873 | load_dataset('cnn_dalymail', '3.0.0') gives a 'Not a directory' error | I see the issue happening again today -
[nltk_data] Downloading package stopwords to /root/nltk_data...
[nltk_data] Package stopwords is already up-to-date!
Downloading and preparing dataset cnn_dailymail/3.0.0 (download: 558.32 MiB, generated: 1.28 GiB, post-processed: Unknown size, total: 1.82 GiB) to /root/.... | ```
from datasets import load_dataset
dataset = load_dataset('cnn_dailymail', '3.0.0')
```
Stack trace:
```
---------------------------------------------------------------------------
NotADirectoryError Traceback (most recent call last)
<ipython-input-6-2e06a8332652> in <module>()
... | 108 | load_dataset('cnn_dalymail', '3.0.0') gives a 'Not a directory' error
```
from datasets import load_dataset
dataset = load_dataset('cnn_dailymail', '3.0.0')
```
Stack trace:
```
---------------------------------------------------------------------------
NotADirectoryError Traceback (mo... | [
-0.2242927402,
0.1419457644,
-0.0225758627,
0.2384323776,
0.3819947541,
0.1261650026,
0.6168981194,
0.2147599757,
-0.0231773313,
0.1334126741,
-0.1637565494,
0.058236558,
-0.3759567142,
-0.1230778396,
0.0085836845,
0.1149686202,
-0.096401386,
0.1582956165,
-0.205109641,
-0.1422... |
https://github.com/huggingface/datasets/issues/873 | load_dataset('cnn_dalymail', '3.0.0') gives a 'Not a directory' error | > atal("Unsupported publisher: %s", publisher) --> 134 files = sorted(os.listdir(top_dir)) 135 136 ret_files = []
>
> NotADirectoryError: [Errno 20] Not a directory: '/root/.cache/huggingface/datasets/downloads/1bc05d24fa6dda2468e83a73cf6dc207226e01e3c48a507ea716dc0421da583b/cnn/stories'
>
> Can someone please ta... | ```
from datasets import load_dataset
dataset = load_dataset('cnn_dailymail', '3.0.0')
```
Stack trace:
```
---------------------------------------------------------------------------
NotADirectoryError Traceback (most recent call last)
<ipython-input-6-2e06a8332652> in <module>()
... | 199 | load_dataset('cnn_dalymail', '3.0.0') gives a 'Not a directory' error
```
from datasets import load_dataset
dataset = load_dataset('cnn_dailymail', '3.0.0')
```
Stack trace:
```
---------------------------------------------------------------------------
NotADirectoryError Traceback (mo... | [
-0.2242927402,
0.1419457644,
-0.0225758627,
0.2384323776,
0.3819947541,
0.1261650026,
0.6168981194,
0.2147599757,
-0.0231773313,
0.1334126741,
-0.1637565494,
0.058236558,
-0.3759567142,
-0.1230778396,
0.0085836845,
0.1149686202,
-0.096401386,
0.1582956165,
-0.205109641,
-0.1422... |
https://github.com/huggingface/datasets/issues/873 | load_dataset('cnn_dalymail', '3.0.0') gives a 'Not a directory' error | experience the same problem, ccdv/cnn_dailymail not working either.
Solve this problem by installing datasets library from the master branch:
python -m pip install git+https://github.com/huggingface/datasets.git@master | ```
from datasets import load_dataset
dataset = load_dataset('cnn_dailymail', '3.0.0')
```
Stack trace:
```
---------------------------------------------------------------------------
NotADirectoryError Traceback (most recent call last)
<ipython-input-6-2e06a8332652> in <module>()
... | 24 | load_dataset('cnn_dalymail', '3.0.0') gives a 'Not a directory' error
```
from datasets import load_dataset
dataset = load_dataset('cnn_dailymail', '3.0.0')
```
Stack trace:
```
---------------------------------------------------------------------------
NotADirectoryError Traceback (mo... | [
-0.2242927402,
0.1419457644,
-0.0225758627,
0.2384323776,
0.3819947541,
0.1261650026,
0.6168981194,
0.2147599757,
-0.0231773313,
0.1334126741,
-0.1637565494,
0.058236558,
-0.3759567142,
-0.1230778396,
0.0085836845,
0.1149686202,
-0.096401386,
0.1582956165,
-0.205109641,
-0.1422... |
https://github.com/huggingface/datasets/issues/871 | terminate called after throwing an instance of 'google::protobuf::FatalException' | Loading the iwslt2017-en-nl config of iwslt2017 works fine on my side.
Maybe you can open an issue on transformers as well ? And also add more details about your environment (OS, python version, version of transformers and datasets etc.) | Hi
I am using the dataset "iwslt2017-en-nl", and after downloading it I am getting this error when trying to evaluate it on T5-base with seq2seq_trainer.py in the huggingface repo. Could you assist me please? Thanks
100%|█████████████████████████████████████████████████████████████████████████████████████████████... | 39 | terminate called after throwing an instance of 'google::protobuf::FatalException'
Hi
I am using the dataset "iwslt2017-en-nl", and after downloading it I am getting this error when trying to evaluate it on T5-base with seq2seq_trainer.py in the huggingface repo. Could you assist me please? Thanks
100%|█████████... | [
-0.2756949663,
-0.6896211505,
0.1162040234,
0.3686617315,
0.3344204724,
-0.0612297319,
0.3348028064,
0.1536110342,
-0.2083399147,
0.1179819405,
-0.0654010102,
-0.3324612081,
-0.0295324028,
0.4058205187,
-0.1832790077,
-0.2990667224,
0.0189543366,
0.1540885419,
-0.462133199,
-0.... |
https://github.com/huggingface/datasets/issues/871 | terminate called after throwing an instance of 'google::protobuf::FatalException' | closing now, figured out this is because the max length of decoder was set smaller than the input_dimensions. thanks | Hi
I am using the dataset "iwslt2017-en-nl", and after downloading it I am getting this error when trying to evaluate it on T5-base with seq2seq_trainer.py in the huggingface repo. Could you assist me please? Thanks
100%|█████████████████████████████████████████████████████████████████████████████████████████████... | 19 | terminate called after throwing an instance of 'google::protobuf::FatalException'
Hi
I am using the dataset "iwslt2017-en-nl", and after downloading it I am getting this error when trying to evaluate it on T5-base with seq2seq_trainer.py in the huggingface repo. Could you assist me please? Thanks
100%|█████████... | [
-0.1825483739,
-0.7557111382,
0.0836405382,
0.5567312837,
0.3312152624,
-0.0441039875,
0.2286124229,
0.2201019526,
-0.3796161115,
0.3128045797,
-0.010040815,
-0.4250835776,
0.0028814189,
0.5060403943,
-0.228791967,
-0.4332331419,
-0.071047537,
0.3094233871,
-0.3874347508,
0.101... |
https://github.com/huggingface/datasets/issues/870 | [Feature Request] Add optional parameter in text loading script to preserve linebreaks | Hi ! Thanks for your message.
Indeed it's a free feature we can add and that can be useful.
If you want to contribute, feel free to open a PR to add it to the text dataset script :) | I'm working on a project about rhyming verse using phonetic poetry and song lyrics, and line breaks are a vital part of the data.
I recently switched over to use the datasets library when my various corpora grew larger than my computer's memory. And so far, it is SO great.
But the first time I processed all of ... | 39 | [Feature Request] Add optional parameter in text loading script to preserve linebreaks
I'm working on a project about rhyming verse using phonetic poetry and song lyrics, and line breaks are a vital part of the data.
I recently switched over to use the datasets library when my various corpora grew larger than my ... | [
-0.4354713857,
0.2950848937,
0.0567215495,
-0.2441310883,
-0.0972464755,
-0.2072376013,
0.3331261873,
0.1758842766,
0.2980455458,
0.3277401328,
0.503434062,
0.2391880155,
-0.0176375974,
0.1306848377,
-0.0111618098,
-0.1541547924,
-0.1673547626,
0.3164185286,
0.0948441774,
0.094... |
https://github.com/huggingface/datasets/issues/866 | OSCAR from Inria group | PR is already open here : #348
The only thing remaining is to compute the metadata of each subdataset (one per language + shuffled/unshuffled).
As soon as #863 is merged we can start computing them. This will take a bit of time though | ## Adding a Dataset
- **Name:** *OSCAR* (Open Super-large Crawled ALMAnaCH coRpus), multilingual parsing of Common Crawl (separate crawls for many different languages), [here](https://oscar-corpus.com/).
- **Description:** *OSCAR or Open Super-large Crawled ALMAnaCH coRpus is a huge multilingual corpus obtained by la... | 43 | OSCAR from Inria group
## Adding a Dataset
- **Name:** *OSCAR* (Open Super-large Crawled ALMAnaCH coRpus), multilingual parsing of Common Crawl (separate crawls for many different languages), [here](https://oscar-corpus.com/).
- **Description:** *OSCAR or Open Super-large Crawled ALMAnaCH coRpus is a huge multiling... | [
-0.0789996162,
-0.0689799711,
-0.0829852298,
0.0613857135,
-0.0888233036,
0.094061628,
-0.0122486288,
0.2985755503,
0.0025456205,
0.0863161907,
-0.5969398022,
-0.0512049347,
-0.2450726628,
0.0754725039,
0.1506561786,
-0.3321435153,
0.0520218983,
-0.1897599548,
-0.2447093278,
-0... |
https://github.com/huggingface/datasets/issues/865 | Have Trouble importing `datasets` | I'm sorry, this was a problem with my environment.
Now that I have identified the cause as an environment dependency, I would like to fix it and try again.
Excuse me for the noise. | I'm failing to import transformers (v4.0.0-dev), and tracing the cause, it seems to be a failure to import datasets.
I cloned the newest version of datasets (master branch) and ran `pip install -e .`.
Then, `import datasets` causes the error below.
```
~/workspace/Clone/datasets/src/datasets/utils/file_utils.py in ... | 34 | Have Trouble importing `datasets`
I'm failing to import transformers (v4.0.0-dev), and tracing the cause seems to be failing to import datasets.
I cloned the newest version of datasets (master branch) and ran `pip install -e .`.
Then, `import datasets` causes the error below.
```
~/workspace/Clone/datasets/... | [
-0.2681196332,
0.1308587939,
0.0091980901,
0.172247842,
0.2753850818,
0.0738353804,
0.2072006166,
0.1833840609,
0.1529002637,
-0.1035957262,
-0.2849371731,
-0.1424153447,
-0.1180941314,
-0.3181896508,
0.0306982324,
0.1511704177,
0.2635428309,
0.3381053507,
-0.3553361893,
-0.265... |
https://github.com/huggingface/datasets/issues/864 | Unable to download cnn_dailymail dataset | Same here! My kaggle notebook stopped working like yesterday. It's strange because I have fixed version of datasets==1.1.2 | ### Script to reproduce the error
```
from datasets import load_dataset
train_dataset = load_dataset("cnn_dailymail", "3.0.0", split='train[:10%]')
valid_dataset = load_dataset("cnn_dailymail","3.0.0", split="validation[:5%]")
```
### Error
```
-------------------------------------------------------------... | 18 | Unable to download cnn_dailymail dataset
### Script to reproduce the error
```
from datasets import load_dataset
train_dataset = load_dataset("cnn_dailymail", "3.0.0", split='train[:10%]')
valid_dataset = load_dataset("cnn_dailymail","3.0.0", split="validation[:5%]")
```
### Error
```
------------------... | [
-0.3053484559,
0.2343732715,
-0.079227142,
0.1706800461,
0.360565573,
0.1738085747,
0.5607902408,
0.2436561435,
-0.1719247848,
0.0751597509,
-0.1179394275,
0.0767753795,
-0.2897081971,
-0.0377530158,
-0.0130564943,
-0.0291595478,
-0.2127257735,
0.1067627445,
-0.1973430067,
0.02... |
https://github.com/huggingface/datasets/issues/864 | Unable to download cnn_dailymail dataset | I couldn't reproduce unfortunately. I tried
```python
from datasets import load_dataset
load_dataset("cnn_dailymail", "3.0.0", download_mode="force_redownload")
```
and it worked fine on both my env (python 3.7.2) and colab (python 3.6.9)
Maybe there was an issue with the google drive download link of the dat... | ### Script to reproduce the error
```
from datasets import load_dataset
train_dataset = load_dataset("cnn_dailymail", "3.0.0", split='train[:10%]')
valid_dataset = load_dataset("cnn_dailymail","3.0.0", split="validation[:5%]")
```
### Error
```
-------------------------------------------------------------... | 66 | Unable to download cnn_dailymail dataset
### Script to reproduce the error
```
from datasets import load_dataset
train_dataset = load_dataset("cnn_dailymail", "3.0.0", split='train[:10%]')
valid_dataset = load_dataset("cnn_dailymail","3.0.0", split="validation[:5%]")
```
### Error
```
------------------... | [
-0.3053484559,
0.2343732715,
-0.079227142,
0.1706800461,
0.360565573,
0.1738085747,
0.5607902408,
0.2436561435,
-0.1719247848,
0.0751597509,
-0.1179394275,
0.0767753795,
-0.2897081971,
-0.0377530158,
-0.0130564943,
-0.0291595478,
-0.2127257735,
0.1067627445,
-0.1973430067,
0.02... |
https://github.com/huggingface/datasets/issues/864 | Unable to download cnn_dailymail dataset | No, It's working fine now. Very strange. Here are my python and request versions
requests 2.24.0
Python 3.8.2 | ### Script to reproduce the error
```
from datasets import load_dataset
train_dataset = load_dataset("cnn_dailymail", "3.0.0", split='train[:10%]')
valid_dataset = load_dataset("cnn_dailymail","3.0.0", split="validation[:5%]")
```
### Error
```
-------------------------------------------------------------... | 18 | Unable to download cnn_dailymail dataset
### Script to reproduce the error
```
from datasets import load_dataset
train_dataset = load_dataset("cnn_dailymail", "3.0.0", split='train[:10%]')
valid_dataset = load_dataset("cnn_dailymail","3.0.0", split="validation[:5%]")
```
### Error
```
------------------... | [
-0.3053484559,
0.2343732715,
-0.079227142,
0.1706800461,
0.360565573,
0.1738085747,
0.5607902408,
0.2436561435,
-0.1719247848,
0.0751597509,
-0.1179394275,
0.0767753795,
-0.2897081971,
-0.0377530158,
-0.0130564943,
-0.0291595478,
-0.2127257735,
0.1067627445,
-0.1973430067,
0.02... |
https://github.com/huggingface/datasets/issues/861 | Possible Bug: Small training/dataset file creates gigantic output | The preprocessing tokenizes the input text. Tokenization outputs `input_ids`, `attention_mask`, `token_type_ids` and `special_tokens_mask`. All those are of length`max_seq_length` because of padding. Therefore for each sample it generate 4 *`max_seq_length` integers. Currently they're all saved as int64. This is why th... | Hey guys,
I was trying to create a new bert model from scratch via _huggingface transformers + tokenizers + datasets_ (actually using this example script by your team: https://github.com/huggingface/transformers/blob/master/examples/language-modeling/run_mlm.py). It was supposed to be a first test with a small 5 GB r... | 58 | Possible Bug: Small training/dataset file creates gigantic output
Hey guys,
I was trying to create a new bert model from scratch via _huggingface transformers + tokenizers + datasets_ (actually using this example script by your team: https://github.com/huggingface/transformers/blob/master/examples/language-modeling... | [
-0.1432340741,
-0.3876415789,
0.1133137122,
0.3227744997,
0.592713356,
-0.1981911212,
0.287423104,
0.3909299374,
-0.3091714382,
0.1720578223,
-0.0339281149,
-0.1797882766,
-0.3273913264,
0.1897951066,
0.1685712487,
-0.0655539408,
0.1466005594,
0.1161786541,
-0.0373178869,
-0.25... |
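The explanation above (four fixed-length int64 columns per sample) makes the output size easy to estimate with pure arithmetic; the sample counts below are illustrative.

```python
def tokenized_cache_bytes(n_samples, max_seq_length, n_columns=4, itemsize=8):
    """Estimate the on-disk size when every sample stores n_columns
    fixed-length arrays (input_ids, attention_mask, token_type_ids,
    special_tokens_mask) of max_seq_length int64 values (8 bytes each)."""
    return n_samples * n_columns * max_seq_length * itemsize

# e.g. 10M samples at max_seq_length=512:
# 10_000_000 * 4 * 512 * 8 bytes ≈ 164 GB, even if the raw text was small
```

The same formula shows why dropping columns or storing smaller integer types shrinks the output proportionally.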
https://github.com/huggingface/datasets/issues/861 | Possible Bug: Small training/dataset file creates gigantic output | First I think we should disable padding in the dataset processing and let the data collator do it.
Then I'm wondering if you need attention_mask and token_type_ids at this point ?
Finally we can also specify the output feature types at this line https://github.com/huggingface/transformers/blob/master/examples/lan... | Hey guys,
I was trying to create a new bert model from scratch via _huggingface transformers + tokenizers + datasets_ (actually using this example script by your team: https://github.com/huggingface/transformers/blob/master/examples/language-modeling/run_mlm.py). It was supposed to be a first test with a small 5 GB r... | 90 | Possible Bug: Small training/dataset file creates gigantic output
Hey guys,
I was trying to create a new bert model from scratch via _huggingface transformers + tokenizers + datasets_ (actually using this example script by your team: https://github.com/huggingface/transformers/blob/master/examples/language-modeling... | [
-0.2299050093,
-0.3667339087,
0.1243237406,
0.2771855295,
0.603097856,
-0.1695277691,
0.312145263,
0.3734355569,
-0.2599174082,
0.1522066444,
-0.0589236319,
-0.1322189569,
-0.2868516743,
0.2487873584,
0.1246002465,
-0.1173475608,
0.1013760939,
0.1097861826,
-0.0822898671,
-0.25... |
https://github.com/huggingface/datasets/issues/861 | Possible Bug: Small training/dataset file creates gigantic output | > First I think we should disable padding in the dataset processing and let the data collator do it.
No, you can't do that on TPUs as dynamic shapes will result in a very slow training. The script can however be tweaked to use the `PaddingDataCollator` with a fixed max length instead of dynamic batching.
For the ... | Hey guys,
I was trying to create a new bert model from scratch via _huggingface transformers + tokenizers + datasets_ (actually using this example script by your team: https://github.com/huggingface/transformers/blob/master/examples/language-modeling/run_mlm.py). It was supposed to be a first test with a small 5 GB r... | 91 | Possible Bug: Small training/dataset file creates gigantic output
Hey guys,
I was trying to create a new bert model from scratch via _huggingface transformers + tokenizers + datasets_ (actually using this example script by your team: https://github.com/huggingface/transformers/blob/master/examples/language-modeling... | [
-0.2449232936,
-0.4394874573,
0.1173569039,
0.3189444244,
0.5710245967,
-0.1765768677,
0.2565916777,
0.4291754961,
-0.3390788436,
0.1573357582,
-0.0675727352,
-0.2011723667,
-0.308722645,
0.1915092468,
0.1266336888,
-0.1478894055,
0.0698193461,
0.1731738597,
-0.1252351403,
-0.2... |
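The padding trade-off discussed above can be sketched framework-free: pad each batch only to its own longest sequence (dynamic padding, what a data collator does), or to a fixed length for the static shapes TPUs need. This is only an illustrative stand-in for `transformers`' data collators.

```python
def pad_batch(sequences, pad_id=0, fixed_length=None):
    """Pad a batch of token-id lists.

    With fixed_length=None, pad to the batch's own max length (dynamic
    padding); otherwise pad to fixed_length (static shapes, e.g. TPUs).
    Returns input_ids plus the matching attention_mask."""
    target = fixed_length if fixed_length is not None else max(len(s) for s in sequences)
    input_ids, attention_mask = [], []
    for seq in sequences:
        pad = target - len(seq)
        input_ids.append(list(seq) + [pad_id] * pad)
        attention_mask.append([1] * len(seq) + [0] * pad)
    return {"input_ids": input_ids, "attention_mask": attention_mask}
```

Because the dataset then stores only the unpadded token ids, the per-sample cost scales with the real text length instead of `max_seq_length`.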
https://github.com/huggingface/datasets/issues/861 | Possible Bug: Small training/dataset file creates gigantic output | Oh yes right..
Do you think that a lazy map feature on the `datasets` side could help to avoid storing padded tokenized texts then ? | Hey guys,
I was trying to create a new bert model from scratch via _huggingface transformers + tokenizers + datasets_ (actually using this example script by your team: https://github.com/huggingface/transformers/blob/master/examples/language-modeling/run_mlm.py). It was supposed to be a first test with a small 5 GB r... | 25 | Possible Bug: Small training/dataset file creates gigantic output
Hey guys,
I was trying to create a new bert model from scratch via _huggingface transformers + tokenizers + datasets_ (actually using this example script by your team: https://github.com/huggingface/transformers/blob/master/examples/language-modeling... | [
-0.1584592611,
-0.3227499723,
0.1142650545,
0.2783169448,
0.6050812006,
-0.0893944949,
0.3665045202,
0.3757137656,
-0.2504403591,
0.1644323021,
-0.0610659346,
-0.0809402689,
-0.3737187386,
0.1472707242,
0.1473816931,
0.0324399546,
0.1391413808,
0.1354866326,
-0.077372916,
-0.24... |
https://github.com/huggingface/datasets/issues/861 | Possible Bug: Small training/dataset file creates gigantic output | I think I can do the tweak mentioned above with the data collator as short fix (but fully focused on v4 right now so that will be for later this week, beginning of next week :-) ).
If it doesn't hurt performance to tokenize on the fly, that would clearly be the long-term solution however! | Hey guys,
I was trying to create a new bert model from scratch via _huggingface transformers + tokenizers + datasets_ (actually using this example script by your team: https://github.com/huggingface/transformers/blob/master/examples/language-modeling/run_mlm.py). It was supposed to be a first test with a small 5 GB r... | 55 | Possible Bug: Small training/dataset file creates gigantic output
Hey guys,
I was trying to create a new bert model from scratch via _huggingface transformers + tokenizers + datasets_ (actually using this example script by your team: https://github.com/huggingface/transformers/blob/master/examples/language-modeling... | [
-0.2022947371,
-0.292021662,
0.1189169437,
0.2300963253,
0.6037492156,
-0.1722532958,
0.3156900108,
0.389885366,
-0.2658644915,
0.1825165749,
-0.0329228416,
-0.1477787942,
-0.3427738249,
0.2524910867,
0.1107930467,
-0.0713135824,
0.1064803898,
0.1568872184,
-0.0434263609,
-0.22... |
https://github.com/huggingface/datasets/issues/861 | Possible Bug: Small training/dataset file creates gigantic output | > Hey guys,
>
> I was trying to create a new bert model from scratch via _huggingface transformers + tokenizers + datasets_ (actually using this example script by your team: https://github.com/huggingface/transformers/blob/master/examples/language-modeling/run_mlm.py). It was supposed to be a first test with a small ... | Hey guys,
I was trying to create a new bert model from scratch via _huggingface transformers + tokenizers + datasets_ (actually using this example script by your team: https://github.com/huggingface/transformers/blob/master/examples/language-modeling/run_mlm.py). It was supposed to be a first test with a small 5 GB r... | 273 | Possible Bug: Small training/dataset file creates gigantic output
Hey guys,
I was trying to create a new bert model from scratch via _huggingface transformers + tokenizers + datasets_ (actually using this example script by your team: https://github.com/huggingface/transformers/blob/master/examples/language-modeling... | [
-0.1276513636,
-0.4218365848,
0.1068673581,
0.2943579555,
0.6427577734,
-0.1283719391,
0.3398815095,
0.4012147784,
-0.2470449209,
0.1549628228,
-0.0381341912,
-0.1218296364,
-0.2987715006,
0.2704770267,
0.1444564909,
-0.084858492,
0.0800592378,
0.1567350328,
-0.0432324149,
-0.3... |
https://github.com/huggingface/datasets/issues/861 | Possible Bug: Small training/dataset file creates gigantic output | Hi @NebelAI, we have optimized Datasets' disk usage in the latest release v1.5.
Feel free to update your Datasets version
```shell
pip install -U datasets
```
and see if it better suits your needs. | Hey guys,
I was trying to create a new bert model from scratch via _huggingface transformers + tokenizers + datasets_ (actually using this example script by your team: https://github.com/huggingface/transformers/blob/master/examples/language-modeling/run_mlm.py). It was supposed to be a first test with a small 5 GB r... | 34 | Possible Bug: Small training/dataset file creates gigantic output
Hey guys,
I was trying to create a new bert model from scratch via _huggingface transformers + tokenizers + datasets_ (actually using this example script by your team: https://github.com/huggingface/transformers/blob/master/examples/language-modeling... | [
-0.2338341177,
-0.3642717302,
0.1098739579,
0.3271674216,
0.5802887678,
-0.1227115989,
0.2948493063,
0.4052776396,
-0.1822133213,
0.1775165796,
-0.0482909121,
-0.1247081012,
-0.3384364843,
0.2299036682,
0.0976039469,
-0.0641688034,
0.1237756759,
0.153372094,
-0.1044933945,
-0.2... |
https://github.com/huggingface/datasets/issues/854 | wmt16 does not download | Hi,I also posted it to the forum, but this is a bug, perhaps it needs to be reported here? thanks | Hi, I appreciate your help with the following error, thanks
>>> from datasets import load_dataset
>>> dataset = load_dataset("wmt16", "ro-en", split="train")
Downloading and preparing dataset wmt16/ro-en (download: Unknown size, generated: Unknown size, post-processed: Unknown size, total: Unknown size) to /root/... | 20 | wmt16 does not download
Hi, I appreciate your help with the following error, thanks
>>> from datasets import load_dataset
>>> dataset = load_dataset("wmt16", "ro-en", split="train")
Downloading and preparing dataset wmt16/ro-en (download: Unknown size, generated: Unknown size, post-processed: Unknown size, tot... | [
-0.444955796,
-0.422475934,
-0.0815042406,
0.4179416001,
0.5046889186,
0.1244978011,
0.128145501,
0.213996619,
0.2786701918,
0.0013543622,
0.0727131665,
-0.1104613468,
-0.2351616621,
0.0972074568,
0.2837480903,
-0.040581163,
-0.123862505,
-0.0028828436,
-0.4859333932,
0.0390386... |
https://github.com/huggingface/datasets/issues/854 | wmt16 does not download | It looks like the official OPUS server for WMT16 doesn't provide the data files anymore (503 error).
I searched a bit and couldn't find a mirror except maybe http://nlp.ffzg.hr/resources/corpora/setimes/ (the data are a cleaned version of the original ones though)
Should we consider replacing the old urls with these ... | Hi, I appreciate your help with the following error, thanks
>>> from datasets import load_dataset
>>> dataset = load_dataset("wmt16", "ro-en", split="train")
Downloading and preparing dataset wmt16/ro-en (download: Unknown size, generated: Unknown size, post-processed: Unknown size, total: Unknown size) to /root/... | 59 | wmt16 does not download
Hi, I appreciate your help with the following error, thanks
>>> from datasets import load_dataset
>>> dataset = load_dataset("wmt16", "ro-en", split="train")
Downloading and preparing dataset wmt16/ro-en (download: Unknown size, generated: Unknown size, post-processed: Unknown size, tot... | [
-0.444955796,
-0.422475934,
-0.0815042406,
0.4179416001,
0.5046889186,
0.1244978011,
0.128145501,
0.213996619,
0.2786701918,
0.0013543622,
0.0727131665,
-0.1104613468,
-0.2351616621,
0.0972074568,
0.2837480903,
-0.040581163,
-0.123862505,
-0.0028828436,
-0.4859333932,
0.0390386... |
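While the upstream server is returning 503s there is no real client-side fix, but for transient outages a plain retry-with-backoff wrapper around the failing call can help. A generic stdlib sketch (not a `datasets` API; `flaky_download` is a hypothetical stand-in for whatever call raises):

```python
import time

def retry(fn, attempts=3, base_delay=0.01, retry_on=(ConnectionError,)):
    # Call `fn`, retrying with exponential backoff on transient errors.
    for attempt in range(attempts):
        try:
            return fn()
        except retry_on:
            if attempt == attempts - 1:
                raise
            time.sleep(base_delay * 2 ** attempt)

calls = {"n": 0}

def flaky_download():
    # Stand-in for a download that 503s twice before succeeding.
    calls["n"] += 1
    if calls["n"] < 3:
        raise ConnectionError("503 Service Unavailable")
    return "ok"

print(retry(flaky_download))  # ok
```

For a hard outage like the one described here, retries only delay the failure; they are useful once the server is merely flaky again.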
https://github.com/huggingface/datasets/issues/854 | wmt16 does not download | The data storage is down at the moment. Sorry. Hopefully, it will come back soon. Apologies for the inconvenience ... | Hi, I appreciate your help with the following error, thanks
>>> from datasets import load_dataset
>>> dataset = load_dataset("wmt16", "ro-en", split="train")
Downloading and preparing dataset wmt16/ro-en (download: Unknown size, generated: Unknown size, post-processed: Unknown size, total: Unknown size) to /root/... | 20 | wmt16 does not download
Hi, I appreciate your help with the following error, thanks
>>> from datasets import load_dataset
>>> dataset = load_dataset("wmt16", "ro-en", split="train")
Downloading and preparing dataset wmt16/ro-en (download: Unknown size, generated: Unknown size, post-processed: Unknown size, tot... | [
-0.444955796,
-0.422475934,
-0.0815042406,
0.4179416001,
0.5046889186,
0.1244978011,
0.128145501,
0.213996619,
0.2786701918,
0.0013543622,
0.0727131665,
-0.1104613468,
-0.2351616621,
0.0972074568,
0.2837480903,
-0.040581163,
-0.123862505,
-0.0028828436,
-0.4859333932,
0.0390386... |
https://github.com/huggingface/datasets/issues/854 | wmt16 does not download | Dear great huggingface team, this is not working yet. I would really appreciate a temporary fix, as I need this for my project and it is time sensitive. I will be grateful for your help on this. | Hi, I appreciate your help with the following error, thanks
>>> from datasets import load_dataset
>>> dataset = load_dataset("wmt16", "ro-en", split="train")
Downloading and preparing dataset wmt16/ro-en (download: Unknown size, generated: Unknown size, post-processed: Unknown size, total: Unknown size) to /root/... | 38 | wmt16 does not download
Hi, I appreciate your help with the following error, thanks
>>> from datasets import load_dataset
>>> dataset = load_dataset("wmt16", "ro-en", split="train")
Downloading and preparing dataset wmt16/ro-en (download: Unknown size, generated: Unknown size, post-processed: Unknown size, tot... | [
-0.444955796,
-0.422475934,
-0.0815042406,
0.4179416001,
0.5046889186,
0.1244978011,
0.128145501,
0.213996619,
0.2786701918,
0.0013543622,
0.0727131665,
-0.1104613468,
-0.2351616621,
0.0972074568,
0.2837480903,
-0.040581163,
-0.123862505,
-0.0028828436,
-0.4859333932,
0.0390386... |
https://github.com/huggingface/datasets/issues/854 | wmt16 does not download | We have reached out to the OPUS team which is currently working on making the data available again. Cc @jorgtied | Hi, I appreciate your help with the following error, thanks
>>> from datasets import load_dataset
>>> dataset = load_dataset("wmt16", "ro-en", split="train")
Downloading and preparing dataset wmt16/ro-en (download: Unknown size, generated: Unknown size, post-processed: Unknown size, total: Unknown size) to /root/... | 20 | wmt16 does not download
Hi, I appreciate your help with the following error, thanks
>>> from datasets import load_dataset
>>> dataset = load_dataset("wmt16", "ro-en", split="train")
Downloading and preparing dataset wmt16/ro-en (download: Unknown size, generated: Unknown size, post-processed: Unknown size, tot... | [
-0.444955796,
-0.422475934,
-0.0815042406,
0.4179416001,
0.5046889186,
0.1244978011,
0.128145501,
0.213996619,
0.2786701918,
0.0013543622,
0.0727131665,
-0.1104613468,
-0.2351616621,
0.0972074568,
0.2837480903,
-0.040581163,
-0.123862505,
-0.0028828436,
-0.4859333932,
0.0390386... |
https://github.com/huggingface/datasets/issues/854 | wmt16 does not download | Hi, this is still down, I would be really grateful if you could ping them one more time. thank you so much. | Hi, I appreciate your help with the following error, thanks
>>> from datasets import load_dataset
>>> dataset = load_dataset("wmt16", "ro-en", split="train")
Downloading and preparing dataset wmt16/ro-en (download: Unknown size, generated: Unknown size, post-processed: Unknown size, total: Unknown size) to /root/... | 22 | wmt16 does not download
Hi, I appreciate your help with the following error, thanks
>>> from datasets import load_dataset
>>> dataset = load_dataset("wmt16", "ro-en", split="train")
Downloading and preparing dataset wmt16/ro-en (download: Unknown size, generated: Unknown size, post-processed: Unknown size, tot... | [
-0.444955796,
-0.422475934,
-0.0815042406,
0.4179416001,
0.5046889186,
0.1244978011,
0.128145501,
0.213996619,
0.2786701918,
0.0013543622,
0.0727131665,
-0.1104613468,
-0.2351616621,
0.0972074568,
0.2837480903,
-0.040581163,
-0.123862505,
-0.0028828436,
-0.4859333932,
0.0390386... |
https://github.com/huggingface/datasets/issues/854 | wmt16 does not download | Hi
I am trying with multiple settings of wmt datasets and all failed so far, I need to have at least one dataset working for testing some code, and this is really time sensitive, I greatly appreciate letting me know of one translation dataset currently working. thanks | Hi, I appreciate your help with the following error, thanks
>>> from datasets import load_dataset
>>> dataset = load_dataset("wmt16", "ro-en", split="train")
Downloading and preparing dataset wmt16/ro-en (download: Unknown size, generated: Unknown size, post-processed: Unknown size, total: Unknown size) to /root/... | 46 | wmt16 does not download
Hi, I appreciate your help with the following error, thanks
>>> from datasets import load_dataset
>>> dataset = load_dataset("wmt16", "ro-en", split="train")
Downloading and preparing dataset wmt16/ro-en (download: Unknown size, generated: Unknown size, post-processed: Unknown size, tot... | [
-0.444955796,
-0.422475934,
-0.0815042406,
0.4179416001,
0.5046889186,
0.1244978011,
0.128145501,
0.213996619,
0.2786701918,
0.0013543622,
0.0727131665,
-0.1104613468,
-0.2351616621,
0.0972074568,
0.2837480903,
-0.040581163,
-0.123862505,
-0.0028828436,
-0.4859333932,
0.0390386... |
https://github.com/huggingface/datasets/issues/854 | wmt16 does not download | It is still down, unfortunately. I'm sorry for that. It should come up again later today or tomorrow at the latest, if no additional complications happen. | Hi, I appreciate your help with the following error, thanks
>>> from datasets import load_dataset
>>> dataset = load_dataset("wmt16", "ro-en", split="train")
Downloading and preparing dataset wmt16/ro-en (download: Unknown size, generated: Unknown size, post-processed: Unknown size, total: Unknown size) to /root/... | 27 | wmt16 does not download
Hi, I appreciate your help with the following error, thanks
>>> from datasets import load_dataset
>>> dataset = load_dataset("wmt16", "ro-en", split="train")
Downloading and preparing dataset wmt16/ro-en (download: Unknown size, generated: Unknown size, post-processed: Unknown size, tot... | [
-0.444955796,
-0.422475934,
-0.0815042406,
0.4179416001,
0.5046889186,
0.1244978011,
0.128145501,
0.213996619,
0.2786701918,
0.0013543622,
0.0727131665,
-0.1104613468,
-0.2351616621,
0.0972074568,
0.2837480903,
-0.040581163,
-0.123862505,
-0.0028828436,
-0.4859333932,
0.0390386... |
https://github.com/huggingface/datasets/issues/853 | concatenate_datasets support axis=0 or 1 ? | Unfortunately `concatenate_datasets` only supports concatenating the rows, while what you want to achieve is to concatenate the columns.
Currently to add more columns to a dataset, one must use `map`.
What you can do is something like this:
```python
# suppose you have datasets d1, d2, d3
def add_columns(example, ind... | I want to achieve the following result

| 58 | concatenate_datasets support axis=0 or 1 ?
I want to achieve the following result

Unfortunately `concatenate_datasets` only supports concatenating the rows, while what you want to achieve is concatenate... | [
-0.4226087928,
-0.0862601474,
-0.154598996,
0.0781619847,
0.1073096544,
0.4110479057,
0.3115075529,
0.4648511708,
0.1183014438,
0.1705960035,
-0.1758234203,
0.3464903533,
-0.0750730559,
0.5408207178,
-0.2195976228,
-0.2845953107,
0.1583359689,
0.523165822,
-0.5183836222,
0.0794... |
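The truncated snippet in the comment above relies on `Dataset.map` with row indices to graft columns from equally sized datasets. The same idiom, sketched in plain Python so it runs without the library (`d1`, `d2`, `d3` are hypothetical toy datasets; with `datasets` the last line would be `d1.map(add_columns, with_indices=True)`):

```python
# Toy row-oriented "datasets" of the same length.
d1 = [{"text": "a"}, {"text": "b"}]
d2 = [{"label": 0}, {"label": 1}]
d3 = [{"score": 0.5}, {"score": 0.9}]

def add_columns(example, index):
    example = dict(example)    # copy so the source rows stay untouched
    example.update(d2[index])  # add d2's columns for this row
    example.update(d3[index])  # add d3's columns for this row
    return example

# The map-with-indices pattern: walk d1 row by row, pulling from d2/d3 by index.
merged = [add_columns(row, i) for i, row in enumerate(d1)]
print(merged[0])  # {'text': 'a', 'label': 0, 'score': 0.5}
```

The pattern only works when all datasets have the same number of rows in the same order, which is also the precondition an axis=1 concatenation would need.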
https://github.com/huggingface/datasets/issues/853 | concatenate_datasets support axis=0 or 1 ? | That's not really difficult to add, though, no?
I think it can be done without copy.
Maybe let's add it to the roadmap? | I want to achieve the following result

| 23 | concatenate_datasets support axis=0 or 1 ?
I want to achieve the following result

That's not really difficult to add, though, no?
I think it can be done without copy.
Maybe let's add it to the roadmap... | [
-0.5632334352,
0.1566244513,
-0.1052775979,
-0.0830111057,
0.0874816477,
0.1624971479,
0.3028234243,
0.4194338918,
-0.2249420285,
0.3868970871,
-0.1537488997,
0.2858175635,
-0.1336852908,
0.4265961349,
-0.3774044812,
-0.1137893796,
0.1690776944,
0.6831738353,
-0.4871606231,
-0.... |
https://github.com/huggingface/datasets/issues/853 | concatenate_datasets support axis=0 or 1 ? | Actually it's doable but requires updating the `Dataset._data_files` schema to support this.
I'm re-opening this since we may want to add this in the future | I want to achieve the following result

| 26 | concatenate_datasets support axis=0 or 1 ?
I want to achieve the following result

Actually it's doable but requires updating the `Dataset._data_files` schema to support this.
I'm re-opening this since... | [
-0.6200823188,
0.1132395416,
-0.1497739553,
0.1142357886,
0.1257235706,
0.203242749,
0.3956645131,
0.395901531,
-0.0039552553,
0.2456054986,
-0.2038998008,
0.3826509118,
-0.0755911693,
0.5006219149,
-0.4717247784,
-0.0797578841,
0.1835413128,
0.7113133073,
-0.5424259305,
-0.011... |
https://github.com/huggingface/datasets/issues/853 | concatenate_datasets support axis=0 or 1 ? | Hi @lhoestq, I would love to help and add this feature if still needed. My plan is to add an axis variable in the `concatenate_datasets` function in `arrow_dataset.py` and when that is set to 1 concatenate columns instead of rows. | I want to achieve the following result

| 40 | concatenate_datasets support axis=0 or 1 ?
I want to achieve the following result

Hi @lhoestq, I would love to help and add this feature if still needed. My plan is to add an axis variable in the `conca... | [
-0.3484050333,
-0.0351736099,
-0.1219938993,
0.0604967214,
0.1725976467,
0.0918550342,
0.3840518296,
0.3609428108,
0.0169329289,
0.3619816005,
0.0030050878,
0.4973300993,
-0.1567095369,
0.5172731876,
-0.3565700352,
-0.1661899537,
0.1776000112,
0.7238047123,
-0.4239692092,
0.011... |
https://github.com/huggingface/datasets/issues/853 | concatenate_datasets support axis=0 or 1 ? | Hi ! I would love to see this feature implemented as well :) Thank you for proposing your help !
Here are a few things about the current implementation:
- A dataset object is a wrapper of one `pyarrow.Table` that contains the data
- Pyarrow offers an API that allows to transform Table objects. For example there are... | I want to achieve the following result

| 230 | concatenate_datasets support axis=0 or 1 ?
I want to achieve the following result

Hi ! I would love to see this feature implemented as well :) Thank you for proposing your help !
Here is a few things... | [
-0.5330204964,
0.3033459485,
-0.0301636867,
0.0675024912,
-0.0625750422,
-0.0582221225,
0.2094468921,
0.4255704284,
-0.0745845661,
0.1827807128,
-0.1869084686,
0.7275899649,
-0.1599381715,
0.572449863,
-0.2195646316,
-0.2587186992,
0.1644585133,
0.6102406979,
-0.5832623839,
0.2... |
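The comment above frames the feature at the `pyarrow.Table` level. The intended semantics of the two axes can be sketched with plain dict-of-lists stand-ins for tables (a toy model of the behavior, not the pyarrow API):

```python
t1 = {"a": [1, 2], "b": [3, 4]}
t2 = {"c": [5, 6]}

def concat(tables, axis=0):
    if axis == 0:
        # Append rows: tables must share the same columns.
        out = {k: [] for k in tables[0]}
        for t in tables:
            for k in out:
                out[k].extend(t[k])
        return out
    # axis == 1: append columns: tables must have the same number of rows.
    out = {}
    for t in tables:
        out.update(t)
    return out

print(concat([t1, t1], axis=0))  # {'a': [1, 2, 1, 2], 'b': [3, 4, 3, 4]}
print(concat([t1, t2], axis=1))  # {'a': [1, 2], 'b': [3, 4], 'c': [5, 6]}
```

This also makes the validation requirements visible: axis=0 needs matching schemas, axis=1 needs matching row counts (and, in the real library, disjoint column names).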
https://github.com/huggingface/datasets/issues/853 | concatenate_datasets support axis=0 or 1 ? | @lhoestq, we have two Pull Requests to implement:
- Dataset.add_item: #1870
- Dataset.add_column: #2145
which add a single row or column, respectively.
The request here is to implement the concatenation of *multiple* rows/columns. Am I right?
We should agree on the API:
- `concatenate_datasets` with `axis`?
-... | I want to achieve the following result

| 51 | concatenate_datasets support axis=0 or 1 ?
I want to achieve the following result

@lhoestq, we have two Pull Requests to implement:
- Dataset.add_item: #1870
- Dataset.add_column: #2145
which add a s... | [
-0.265938133,
-0.0951727778,
-0.0820606425,
0.0408309437,
-0.2382575423,
0.1075186804,
0.293002367,
0.2532635033,
0.0705211014,
0.1636557728,
-0.0048819561,
0.3495404124,
-0.0274939276,
0.5060871243,
-0.0220798366,
-0.0006613379,
0.2938613892,
0.5536049008,
-0.2584513128,
0.000... |
https://github.com/huggingface/datasets/issues/853 | concatenate_datasets support axis=0 or 1 ? | For the API, I like `concatenate_datasets` with `axis` personally :)
From a list of `Dataset` objects, it would concatenate them to a new `Dataset` object backed by a `ConcatenationTable`, that is the concatenation of the tables of each input dataset. The concatenation is either on axis=0 (append rows) or on axis=1 (a... | I want to achieve the following result

| 158 | concatenate_datasets support axis=0 or 1 ?
I want to achieve the following result

For the API, I like `concatenate_datasets` with `axis` personally :)
From a list of `Dataset` objects, it would concate... | [
-0.3482718766,
0.0645765066,
-0.0892974809,
0.1464073956,
-0.004705885,
0.068712309,
0.1781942695,
0.4301660061,
-0.0550013371,
0.1688196063,
-0.0407027975,
0.5322627425,
-0.1796880364,
0.4457179308,
-0.1766490489,
-0.1298692077,
0.2669723332,
0.6772674322,
-0.5987275243,
0.152... |
https://github.com/huggingface/datasets/issues/849 | Load amazon dataset | Thanks for reporting !
We plan to show information about the different configs of the datasets on the website, with the corresponding `load_dataset` calls.
Also I think the bullet points formatting has been fixed | Hi,
I was going through amazon_us_reviews dataset and found that example API usage given on website is different from the API usage while loading dataset.
Eg. what API usage is on the [website](https://huggingface.co/datasets/amazon_us_reviews)
```
from datasets import load_dataset
dataset = load_dataset("amaz... | 34 | Load amazon dataset
Hi,
I was going through amazon_us_reviews dataset and found that example API usage given on website is different from the API usage while loading dataset.
Eg. what API usage is on the [website](https://huggingface.co/datasets/amazon_us_reviews)
```
from datasets import load_dataset
datase... | [
-0.0789319128,
-0.1892747879,
-0.2111491859,
0.5182409286,
0.1775327623,
0.2812185884,
0.2807132006,
0.1095820293,
0.1075351685,
-0.2891573906,
-0.1656706333,
0.1336099952,
0.3731779754,
0.3808054924,
0.1173838153,
-0.0786428526,
-0.0114681209,
0.0120944893,
-0.0104108602,
0.03... |
https://github.com/huggingface/datasets/issues/848 | Error when concatenate_datasets | As you can see in the error the test checks if `indices_mappings_in_memory` is True or not, which is different from the test you do in your script. In a dataset, both the data and the indices mapping can be either on disk or in memory.
The indices mapping correspond to a mapping on top of the data table that is used... | Hello, when I concatenate two dataset loading from disk, I encountered a problem:
```
test_dataset = load_from_disk('data/test_dataset')
trn_dataset = load_from_disk('data/train_dataset')
train_dataset = concatenate_datasets([trn_dataset, test_dataset])
```
And it reported ValueError blow:
```
--------------... | 172 | Error when concatenate_datasets
Hello, when I concatenate two dataset loading from disk, I encountered a problem:
```
test_dataset = load_from_disk('data/test_dataset')
trn_dataset = load_from_disk('data/train_dataset')
train_dataset = concatenate_datasets([trn_dataset, test_dataset])
```
And it reported Val... | [
-0.0911898687,
-0.1483127028,
-0.0447486341,
0.6809358597,
0.1068756282,
0.2218792588,
0.3129053414,
0.2291088849,
-0.1234837919,
0.1246046498,
-0.1083223149,
0.2736026645,
-0.0816893578,
-0.1234090701,
-0.3013663292,
-0.0974016786,
0.1984925866,
0.1485704184,
-0.5059520602,
-0... |
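The mismatch reported above is between a dataset whose indices mapping is on disk and one whose mapping is in memory. What such a mapping does can be shown with a toy model; `Dataset.flatten_indices()` (suggested later in this thread) conceptually materializes the reordered table the same way:

```python
# A shuffled/filtered dataset keeps the original column table untouched plus a
# list of row indices on top of it (the "indices mapping").
table = {"x": [10, 20, 30, 40]}
indices = [3, 1]  # e.g. the rows kept after a filter, in shuffled order

def flatten_indices(table, indices):
    # Build a new table with the rows materialized in mapped order,
    # after which no indices mapping is needed any more.
    return {k: [v[i] for i in indices] for k, v in table.items()}

print(flatten_indices(table, indices))  # {'x': [40, 20]}
```

Flattening both datasets first removes the on-disk/in-memory asymmetry that the concatenation check complains about.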
https://github.com/huggingface/datasets/issues/848 | Error when concatenate_datasets | > As you can see in the error the test checks if `indices_mappings_in_memory` is True or not, which is different from the test you do in your script. In a dataset, both the data and the indices mapping can be either on disk or in memory.
>
> The indices mapping correspond to a mapping on top of the data table that i... | Hello, when I concatenate two dataset loading from disk, I encountered a problem:
```
test_dataset = load_from_disk('data/test_dataset')
trn_dataset = load_from_disk('data/train_dataset')
train_dataset = concatenate_datasets([trn_dataset, test_dataset])
```
And it reported ValueError blow:
```
--------------... | 184 | Error when concatenate_datasets
Hello, when I concatenate two dataset loading from disk, I encountered a problem:
```
test_dataset = load_from_disk('data/test_dataset')
trn_dataset = load_from_disk('data/train_dataset')
train_dataset = concatenate_datasets([trn_dataset, test_dataset])
```
And it reported Val... | [
-0.0911898687,
-0.1483127028,
-0.0447486341,
0.6809358597,
0.1068756282,
0.2218792588,
0.3129053414,
0.2291088849,
-0.1234837919,
0.1246046498,
-0.1083223149,
0.2736026645,
-0.0816893578,
-0.1234090701,
-0.3013663292,
-0.0974016786,
0.1984925866,
0.1485704184,
-0.5059520602,
-0... |
https://github.com/huggingface/datasets/issues/848 | Error when concatenate_datasets | @lhoestq we can add a mention of `dataset.flatten_indices()` in the error message (no rush, just put it on your TODO list or I can do it when I come at it) | Hello, when I concatenate two dataset loading from disk, I encountered a problem:
```
test_dataset = load_from_disk('data/test_dataset')
trn_dataset = load_from_disk('data/train_dataset')
train_dataset = concatenate_datasets([trn_dataset, test_dataset])
```
And it reported ValueError blow:
```
--------------... | 31 | Error when concatenate_datasets
Hello, when I concatenate two dataset loading from disk, I encountered a problem:
```
test_dataset = load_from_disk('data/test_dataset')
trn_dataset = load_from_disk('data/train_dataset')
train_dataset = concatenate_datasets([trn_dataset, test_dataset])
```
And it reported Val... | [
-0.0911898687,
-0.1483127028,
-0.0447486341,
0.6809358597,
0.1068756282,
0.2218792588,
0.3129053414,
0.2291088849,
-0.1234837919,
0.1246046498,
-0.1083223149,
0.2736026645,
-0.0816893578,
-0.1234090701,
-0.3013663292,
-0.0974016786,
0.1984925866,
0.1485704184,
-0.5059520602,
-0... |
https://github.com/huggingface/datasets/issues/847 | multiprocessing in dataset map "can only test a child process" | It looks like an issue with wandb/tqdm here.
We're using the `multiprocess` library instead of the `multiprocessing` builtin python package to support various types of mapping functions. Maybe there's some sort of incompatibility.
Could you make a minimal script to reproduce or a google colab ? | Using a dataset with a single 'text' field and a fast tokenizer in a jupyter notebook.
```
def tokenizer_fn(example):
return tokenizer.batch_encode_plus(example['text'])
ds_tokenized = text_dataset.map(tokenizer_fn, batched=True, num_proc=6, remove_columns=['text'])
```
```
-------------------------... | 46 | multiprocessing in dataset map "can only test a child process"
Using a dataset with a single 'text' field and a fast tokenizer in a jupyter notebook.
```
def tokenizer_fn(example):
return tokenizer.batch_encode_plus(example['text'])
ds_tokenized = text_dataset.map(tokenizer_fn, batched=True, num_proc=6, ... | [
-0.3956994712,
0.0156291481,
-0.1538685113,
-0.1850298494,
0.1417974979,
-0.0578565411,
0.5068991184,
0.3400262594,
-0.0834039673,
0.1681768894,
-0.0184274632,
0.3370631635,
0.0475012511,
0.0918334797,
-0.2264436632,
0.1021468639,
-0.0123606632,
0.0946919024,
0.1290111095,
0.04... |
https://github.com/huggingface/datasets/issues/847 | multiprocessing in dataset map "can only test a child process" | hi facing the same issue here -
`AssertionError: Caught AssertionError in DataLoader worker process 0.
Original Traceback (most recent call last):
File "/usr/lib/python3.6/logging/__init__.py", line 996, in emit
stream.write(msg)
File "/usr/local/lib/python3.6/dist-packages/wandb/sdk/lib/redirect.py", l... | Using a dataset with a single 'text' field and a fast tokenizer in a jupyter notebook.
```
def tokenizer_fn(example):
return tokenizer.batch_encode_plus(example['text'])
ds_tokenized = text_dataset.map(tokenizer_fn, batched=True, num_proc=6, remove_columns=['text'])
```
```
-------------------------... | 293 | multiprocessing in dataset map "can only test a child process"
Using a dataset with a single 'text' field and a fast tokenizer in a jupyter notebook.
```
def tokenizer_fn(example):
return tokenizer.batch_encode_plus(example['text'])
ds_tokenized = text_dataset.map(tokenizer_fn, batched=True, num_proc=6, ... | [
-0.3956994712,
0.0156291481,
-0.1538685113,
-0.1850298494,
0.1417974979,
-0.0578565411,
0.5068991184,
0.3400262594,
-0.0834039673,
0.1681768894,
-0.0184274632,
0.3370631635,
0.0475012511,
0.0918334797,
-0.2264436632,
0.1021468639,
-0.0123606632,
0.0946919024,
0.1290111095,
0.04... |
https://github.com/huggingface/datasets/issues/847 | multiprocessing in dataset map "can only test a child process" | It looks like this warning :
"Truncation was not explicitly activated but max_length is provided a specific value, "
is not handled well by wandb.
The error occurs when calling the tokenizer.
Maybe you can try to specify `truncation=True` when calling the tokenizer to remove the warning ?
Otherwise I don't know... | Using a dataset with a single 'text' field and a fast tokenizer in a jupyter notebook.
```
def tokenizer_fn(example):
return tokenizer.batch_encode_plus(example['text'])
ds_tokenized = text_dataset.map(tokenizer_fn, batched=True, num_proc=6, remove_columns=['text'])
```
```
-------------------------... | 80 | multiprocessing in dataset map "can only test a child process"
Using a dataset with a single 'text' field and a fast tokenizer in a jupyter notebook.
```
def tokenizer_fn(example):
return tokenizer.batch_encode_plus(example['text'])
ds_tokenized = text_dataset.map(tokenizer_fn, batched=True, num_proc=6, ... | [
-0.3956994712,
0.0156291481,
-0.1538685113,
-0.1850298494,
0.1417974979,
-0.0578565411,
0.5068991184,
0.3400262594,
-0.0834039673,
0.1681768894,
-0.0184274632,
0.3370631635,
0.0475012511,
0.0918334797,
-0.2264436632,
0.1021468639,
-0.0123606632,
0.0946919024,
0.1290111095,
0.04... |
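Besides passing `truncation=True`, the warning can also be filtered out before it ever reaches wandb's redirected stream. A stdlib sketch (the message pattern is an assumption; adapt it to the exact warning text you see):

```python
import warnings

with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")
    # Ignore only the tokenizer warning; the regex is matched against the
    # start of the warning message.
    warnings.filterwarnings("ignore", message=r"Truncation was not explicitly activated")
    warnings.warn("Truncation was not explicitly activated but max_length is provided")
    warnings.warn("some other warning")  # unrelated warnings still come through

print(len(caught))             # 1
print(str(caught[0].message))  # some other warning
```

Outside a test harness you would call `warnings.filterwarnings(...)` once at startup (without `catch_warnings`) so the filter applies process-wide, including in worker processes that inherit it at fork time.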
https://github.com/huggingface/datasets/issues/847 | multiprocessing in dataset map "can only test a child process" | I'm having a similar issue but when I try to do multiprocessing with the `DataLoader`
Code to reproduce:
```
from datasets import load_dataset
book_corpus = load_dataset('bookcorpus', 'plain_text', cache_dir='/home/ad/Desktop/bookcorpus', split='train[:1%]')
book_corpus = book_corpus.map(encode, batched=True... | Using a dataset with a single 'text' field and a fast tokenizer in a jupyter notebook.
```
def tokenizer_fn(example):
return tokenizer.batch_encode_plus(example['text'])
ds_tokenized = text_dataset.map(tokenizer_fn, batched=True, num_proc=6, remove_columns=['text'])
```
```
-------------------------... | 383 | multiprocessing in dataset map "can only test a child process"
Using a dataset with a single 'text' field and a fast tokenizer in a jupyter notebook.
```
def tokenizer_fn(example):
return tokenizer.batch_encode_plus(example['text'])
ds_tokenized = text_dataset.map(tokenizer_fn, batched=True, num_proc=6, ... | [
-0.3956994712,
0.0156291481,
-0.1538685113,
-0.1850298494,
0.1417974979,
-0.0578565411,
0.5068991184,
0.3400262594,
-0.0834039673,
0.1681768894,
-0.0184274632,
0.3370631635,
0.0475012511,
0.0918334797,
-0.2264436632,
0.1021468639,
-0.0123606632,
0.0946919024,
0.1290111095,
0.04... |
https://github.com/huggingface/datasets/issues/847 | multiprocessing in dataset map "can only test a child process" | Isn't it more the pytorch warning on the use of non-writable memory for tensor that triggers this here @lhoestq? (since it seems to be a warning triggered in `torch.tensor()`) | Using a dataset with a single 'text' field and a fast tokenizer in a jupyter notebook.
```
def tokenizer_fn(example):
return tokenizer.batch_encode_plus(example['text'])
ds_tokenized = text_dataset.map(tokenizer_fn, batched=True, num_proc=6, remove_columns=['text'])
```
```
-------------------------... | 29 | multiprocessing in dataset map "can only test a child process"
Using a dataset with a single 'text' field and a fast tokenizer in a jupyter notebook.
```
def tokenizer_fn(example):
return tokenizer.batch_encode_plus(example['text'])
ds_tokenized = text_dataset.map(tokenizer_fn, batched=True, num_proc=6, ... | [
-0.3956994712,
0.0156291481,
-0.1538685113,
-0.1850298494,
0.1417974979,
-0.0578565411,
0.5068991184,
0.3400262594,
-0.0834039673,
0.1681768894,
-0.0184274632,
0.3370631635,
0.0475012511,
0.0918334797,
-0.2264436632,
0.1021468639,
-0.0123606632,
0.0946919024,
0.1290111095,
0.04... |
https://github.com/huggingface/datasets/issues/847 | multiprocessing in dataset map "can only test a child process" | Yep this time this is a warning from pytorch that causes wandb to not work properly.
Could this be a wandb issue ? | Using a dataset with a single 'text' field and a fast tokenizer in a jupyter notebook.
```
def tokenizer_fn(example):
return tokenizer.batch_encode_plus(example['text'])
ds_tokenized = text_dataset.map(tokenizer_fn, batched=True, num_proc=6, remove_columns=['text'])
```
```
-------------------------... | 23 | multiprocessing in dataset map "can only test a child process"
Using a dataset with a single 'text' field and a fast tokenizer in a jupyter notebook.
```
def tokenizer_fn(example):
return tokenizer.batch_encode_plus(example['text'])
ds_tokenized = text_dataset.map(tokenizer_fn, batched=True, num_proc=6, ... | [
-0.3956994712,
0.0156291481,
-0.1538685113,
-0.1850298494,
0.1417974979,
-0.0578565411,
0.5068991184,
0.3400262594,
-0.0834039673,
0.1681768894,
-0.0184274632,
0.3370631635,
0.0475012511,
0.0918334797,
-0.2264436632,
0.1021468639,
-0.0123606632,
0.0946919024,
0.1290111095,
0.04... |
https://github.com/huggingface/datasets/issues/847 | multiprocessing in dataset map "can only test a child process" | Hi @timothyjlaurent @gaceladri
If you're running `transformers` from `master` you can try setting the env var `WAND_DISABLE=true` (from https://github.com/huggingface/transformers/pull/9896) and try again ?
This issue might be related to https://github.com/huggingface/transformers/issues/9623 | Using a dataset with a single 'text' field and a fast tokenizer in a jupyter notebook.
```
def tokenizer_fn(example):
return tokenizer.batch_encode_plus(example['text'])
ds_tokenized = text_dataset.map(tokenizer_fn, batched=True, num_proc=6, remove_columns=['text'])
```
```
-------------------------... | 30 | multiprocessing in dataset map "can only test a child process"
Using a dataset with a single 'text' field and a fast tokenizer in a jupyter notebook.
```
def tokenizer_fn(example):
return tokenizer.batch_encode_plus(example['text'])
ds_tokenized = text_dataset.map(tokenizer_fn, batched=True, num_proc=6, ... | [
-0.3956994712,
0.0156291481,
-0.1538685113,
-0.1850298494,
0.1417974979,
-0.0578565411,
0.5068991184,
0.3400262594,
-0.0834039673,
0.1681768894,
-0.0184274632,
0.3370631635,
0.0475012511,
0.0918334797,
-0.2264436632,
0.1021468639,
-0.0123606632,
0.0946919024,
0.1290111095,
0.04... |
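Disabling the integration via an environment variable only works if the variable is set before the reporting code checks it, and the exact name has varied across `transformers` versions (the comment above spells it `WAND_DISABLE`; other versions read `WANDB_DISABLED`, so verify against your installed version). A minimal sketch of the pattern:

```python
import os

# Set the flag before any training/reporting code runs.
os.environ["WANDB_DISABLED"] = "true"

def reporting_enabled() -> bool:
    # Hypothetical stand-in for the check the integration performs at setup time.
    return os.environ.get("WANDB_DISABLED", "").lower() not in ("true", "1")

print(reporting_enabled())  # False
```

Setting the variable in the shell before launching Python (rather than in code) avoids any ordering issues with module-level checks.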
https://github.com/huggingface/datasets/issues/847 | multiprocessing in dataset map "can only test a child process" | I have commented the lines that cause my code to break. I'm now seeing my reports on Wandb and my code does not break. I am training now, so I will check probably in 6 hours. I suppose that setting wandb disable will work as well. | Using a dataset with a single 'text' field and a fast tokenizer in a jupyter notebook.
```
def tokenizer_fn(example):
return tokenizer.batch_encode_plus(example['text'])
ds_tokenized = text_dataset.map(tokenizer_fn, batched=True, num_proc=6, remove_columns=['text'])
```
```
-------------------------... | 45 | multiprocessing in dataset map "can only test a child process"
Using a dataset with a single 'text' field and a fast tokenizer in a jupyter notebook.
```
def tokenizer_fn(example):
return tokenizer.batch_encode_plus(example['text'])
ds_tokenized = text_dataset.map(tokenizer_fn, batched=True, num_proc=6, ... | [
-0.3956994712,
0.0156291481,
-0.1538685113,
-0.1850298494,
0.1417974979,
-0.0578565411,
0.5068991184,
0.3400262594,
-0.0834039673,
0.1681768894,
-0.0184274632,
0.3370631635,
0.0475012511,
0.0918334797,
-0.2264436632,
0.1021468639,
-0.0123606632,
0.0946919024,
0.1290111095,
0.04... |
https://github.com/huggingface/datasets/issues/846 | Add HoVer multi-hop fact verification dataset | Hi @yjernite I'm new but wanted to contribute. Has anyone already taken this problem and do you think it is suitable for newbies? | ## Adding a Dataset
- **Name:** HoVer
- **Description:** https://twitter.com/YichenJiang9/status/1326954363806429186 contains 20K claim verification examples
- **Paper:** https://arxiv.org/abs/2011.03088
- **Data:** https://hover-nlp.github.io/
- **Motivation:** There are still few multi-hop information extraction... | 23 | Add HoVer multi-hop fact verification dataset
## Adding a Dataset
- **Name:** HoVer
- **Description:** https://twitter.com/YichenJiang9/status/1326954363806429186 contains 20K claim verification examples
- **Paper:** https://arxiv.org/abs/2011.03088
- **Data:** https://hover-nlp.github.io/
- **Motivation:** Ther... | [
-0.2272991091,
-0.0823318362,
0.0003385305,
0.1499358863,
-0.4059507847,
-0.034600202,
0.1919763833,
0.1057021022,
0.096178256,
-0.1447628587,
-0.0471815877,
-0.1035465151,
-0.263531208,
0.2344667763,
0.2132850885,
-0.05957314,
-0.1592914015,
-0.1947901398,
0.1115607321,
0.0093... |
https://github.com/huggingface/datasets/issues/846 | Add HoVer multi-hop fact verification dataset | Hi @tenjjin! This dataset is still up for grabs! Here's the link with the guide to add it. You should play around with the library first (download and look at a few datasets), then follow the steps here:
https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md | ## Adding a Dataset
- **Name:** HoVer
- **Description:** https://twitter.com/YichenJiang9/status/1326954363806429186 contains 20K claim verification examples
- **Paper:** https://arxiv.org/abs/2011.03088
- **Data:** https://hover-nlp.github.io/
- **Motivation:** There are still few multi-hop information extraction... | 39 | Add HoVer multi-hop fact verification dataset
## Adding a Dataset
- **Name:** HoVer
- **Description:** https://twitter.com/YichenJiang9/status/1326954363806429186 contains 20K claim verification examples
- **Paper:** https://arxiv.org/abs/2011.03088
- **Data:** https://hover-nlp.github.io/
- **Motivation:** Ther... | [
-0.2867435515,
-0.2775808275,
-0.08202371,
0.0229687002,
-0.2790859044,
-0.1241934896,
0.1645182967,
0.1147086918,
0.0981637388,
0.1487331837,
-0.0963038281,
0.0598271005,
-0.0950222015,
0.2981518805,
0.2122937143,
-0.1031538621,
-0.1313483566,
-0.0484598055,
-0.0788759589,
0.0... |
https://github.com/huggingface/datasets/issues/843 | use_custom_baseline still produces errors for bertscore | Thanks for reporting ! That's a bug indeed
If you want to contribute, feel free to fix this issue and open a PR :) | `metric = load_metric('bertscore')`
`a1 = "random sentences"`
`b1 = "random sentences"`
`metric.compute(predictions = [a1], references = [b1], lang = 'en')`
`Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/stephen_chan/.local/lib/python3.6/site-packages/datasets/metric.py"... | 24 | use_custom_baseline still produces errors for bertscore
`metric = load_metric('bertscore')`
`a1 = "random sentences"`
`b1 = "random sentences"`
`metric.compute(predictions = [a1], references = [b1], lang = 'en')`
`Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/stephen_c... | [
-0.0237091891,
0.1297199577,
0.1280902922,
0.0422289073,
0.1812679023,
-0.0610473789,
0.4962112606,
0.054329928,
-0.0781160146,
0.1093735248,
0.0923892856,
0.2173015922,
-0.2150027007,
-0.0573221184,
-0.3110678792,
-0.2098555714,
-0.1186008304,
0.254781574,
0.0980245471,
-0.063... |
https://github.com/huggingface/datasets/issues/843 | use_custom_baseline still produces errors for bertscore | This error is because of a mismatch between `datasets` and `bert_score`. With `datasets=1.1.2` and `bert_score>=0.3.6` it works ok. So `pip install -U bert_score` should fix the problem. | `metric = load_metric('bertscore')`
`a1 = "random sentences"`
`b1 = "random sentences"`
`metric.compute(predictions = [a1], references = [b1], lang = 'en')`
`Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/stephen_chan/.local/lib/python3.6/site-packages/datasets/metric.py"... | 27 | use_custom_baseline still produces errors for bertscore
`metric = load_metric('bertscore')`
`a1 = "random sentences"`
`b1 = "random sentences"`
`metric.compute(predictions = [a1], references = [b1], lang = 'en')`
`Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/stephen_c... | [
-0.0475403406,
0.1264258772,
0.1010242552,
0.0786018074,
0.1025783718,
-0.112329483,
0.3708123267,
0.0062378836,
-0.0859040916,
0.0965000167,
0.0731695071,
0.3271627724,
-0.16986157,
-0.0304797832,
-0.3144401908,
-0.2213499248,
-0.0507827885,
0.2818050385,
0.0639782399,
-0.0864... |
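The fix above boils down to a version mismatch (`datasets==1.1.2` needing `bert_score>=0.3.6`). A check like that can be done up front with only the standard library; a minimal sketch, where the package and threshold names are just placeholders:

```python
from importlib import metadata

def needs_upgrade(pkg: str, minimum: str) -> bool:
    """Return True if `pkg` is missing or older than `minimum`.

    Dotted version strings are compared segment by segment as integers
    (a plain string compare would wrongly say "0.10" < "0.9").
    """
    def parts(version):
        return [int(p) for p in version.split(".") if p.isdigit()]
    try:
        return parts(metadata.version(pkg)) < parts(minimum)
    except metadata.PackageNotFoundError:
        return True  # not installed counts as "needs upgrade"

# A package that is certainly not installed reports True.
print(needs_upgrade("surely-not-a-real-package-xyz", "0.3.6"))  # True
```

This is only a sketch: real version schemes (pre-releases, local versions) need `packaging.version.parse` instead.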
https://github.com/huggingface/datasets/issues/843 | use_custom_baseline still produces errors for bertscore | Hello everyone,
I think the problem is not solved:
```
from datasets import load_metric
metric=load_metric('bertscore')
metric.compute(
predictions=predictions,
references=references,
lang='fr',
rescale_with_baseline=True
)
TypeError: get_hash() missing 2 required positional arguments: ... | `metric = load_metric('bertscore')`
`a1 = "random sentences"`
`b1 = "random sentences"`
`metric.compute(predictions = [a1], references = [b1], lang = 'en')`
`Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/stephen_chan/.local/lib/python3.6/site-packages/datasets/metric.py"... | 42 | use_custom_baseline still produces errors for bertscore
`metric = load_metric('bertscore')`
`a1 = "random sentences"`
`b1 = "random sentences"`
`metric.compute(predictions = [a1], references = [b1], lang = 'en')`
`Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/stephen_c... | [
0.0139923375,
0.0406958759,
0.0927704051,
0.0760170594,
0.1578615159,
-0.0484315008,
0.4595751762,
0.0406526029,
-0.0702816546,
0.0988220796,
0.0572407544,
0.2482234687,
-0.2429826409,
-0.0912587345,
-0.313735038,
-0.2044168264,
-0.1234870255,
0.2484202385,
0.1250582933,
-0.068... |
https://github.com/huggingface/datasets/issues/843 | use_custom_baseline still produces errors for bertscore | Hi ! This has been fixed by https://github.com/huggingface/datasets/pull/2770, we'll do a new release soon to make the fix available :)
In the meantime please use an older version of `bert_score` | `metric = load_metric('bertscore')`
`a1 = "random sentences"`
`b1 = "random sentences"`
`metric.compute(predictions = [a1], references = [b1], lang = 'en')`
`Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/stephen_chan/.local/lib/python3.6/site-packages/datasets/metric.py"... | 30 | use_custom_baseline still produces errors for bertscore
`metric = load_metric('bertscore')`
`a1 = "random sentences"`
`b1 = "random sentences"`
`metric.compute(predictions = [a1], references = [b1], lang = 'en')`
`Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/stephen_c... | [
-0.0653989017,
0.1055710688,
0.1252273917,
0.0791327208,
0.1758201867,
-0.1042810902,
0.4214694798,
0.0186610036,
-0.0667725429,
0.1184604913,
0.030669257,
0.2369850576,
-0.1979216486,
0.0304783043,
-0.3029206395,
-0.2030735612,
-0.1065387353,
0.2290029377,
0.0602316037,
-0.001... |
https://github.com/huggingface/datasets/issues/842 | How to enable `.map()` pre-processing pipelines to support multi-node parallelism? | Right now multiprocessing only runs on a single node.
However it's probably possible to extend it to support multiple nodes. Indeed we're using the `multiprocess` library from the `pathos` project to do multiprocessing in `datasets`, and `pathos` is made to support parallelism on several nodes. More info about pathos [on... | Hi,
Currently, multiprocessing can be enabled for the `.map()` stages on a single node. However, in the case of multi-node training, (since more than one node would be available) I'm wondering if it's possible to extend the parallel processing among nodes, instead of only 1 node running the `.map()` while the other ... | 76 | How to enable `.map()` pre-processing pipelines to support multi-node parallelism?
Hi,
Currently, multiprocessing can be enabled for the `.map()` stages on a single node. However, in the case of multi-node training, (since more than one node would be available) I'm wondering if it's possible to extend the parallel... | [
-0.3369054496,
-0.3196842968,
-0.1686350852,
-0.0785501525,
-0.1322285384,
0.0334653482,
0.0507948473,
-0.0447822921,
0.238920331,
0.2358292937,
0.254430145,
0.5422919393,
-0.157851994,
0.3227919042,
0.0088176606,
-0.4016122818,
0.0359967761,
-0.0369439833,
-0.0063833212,
0.268... |
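Until multi-node `.map()` exists natively, a common workaround is to give each node a disjoint shard of the dataset and run single-node multiprocessing inside each shard. A dependency-free sketch of the contiguous-sharding arithmetic (the same spirit as `Dataset.shard`); the node count and rank are stand-ins for whatever your launcher provides:

```python
def shard_indices(num_rows, num_shards, index):
    # Contiguous split: the first (num_rows % num_shards) shards get one
    # extra row, so every row is assigned to exactly one shard.
    div, mod = divmod(num_rows, num_shards)
    start = index * div + min(index, mod)
    end = start + div + (1 if index < mod else 0)
    return list(range(start, end))

# 10 rows spread over 3 "nodes": each node would .map() only its slice.
shards = [shard_indices(10, 3, rank) for rank in range(3)]
print(shards)  # [[0, 1, 2, 3], [4, 5, 6], [7, 8, 9]]
```

Each node then processes its own index list, and results are concatenated afterwards.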
https://github.com/huggingface/datasets/issues/841 | Can not reuse datasets already downloaded | It seems the process needs '/datasets.huggingface.co/datasets/datasets/wikipedia/wikipedia.py'
Where and how to assign this ```wikipedia.py``` after I manually download it ? | Hello,
I need to connect to a frontal node (with an http proxy, no gpu) before connecting to a gpu node (but no http proxy, so I cannot use wget and so on).
I successfully downloaded and reused the wikipedia datasets on a frontal node.
When I connect to the gpu node, I am supposed to use the downloaded datasets from the cache, but... | 19 | Can not reuse datasets already downloaded
Hello,
I need to connect to a frontal node (with an http proxy, no gpu) before connecting to a gpu node (but no http proxy, so I cannot use wget and so on).
I successfully downloaded and reused the wikipedia datasets on a frontal node.
When I connect to the gpu node, I am supposed to... | [
-0.1548725963,
-0.2376585603,
-0.0935402066,
0.2473086715,
0.3116976619,
0.0938447416,
0.202871412,
0.0549307056,
0.4361796081,
-0.1159730181,
0.0571259595,
-0.0696767271,
0.4684702754,
-0.0135198068,
0.0379944183,
-0.0326448567,
-0.1300400794,
-0.1004903615,
0.0436399393,
-0.1... |
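Since the gpu node has no outbound access, one workaround discussed here is to populate the cache on the frontal node and copy the whole cache directory across. A standard-library sketch of the copy step — the directory names are placeholders, not the real HF cache layout, and in practice the transfer would go over scp/rsync or a shared filesystem:

```python
import pathlib
import shutil
import tempfile

def mirror_cache(src: pathlib.Path, dst: pathlib.Path) -> int:
    """Copy a cache tree from the online machine's path to the offline one.

    Returns the number of files copied.
    """
    shutil.copytree(src, dst, dirs_exist_ok=True)
    return sum(1 for p in dst.rglob("*") if p.is_file())

# Demo on a throwaway tree standing in for the downloaded-datasets cache.
root = pathlib.Path(tempfile.mkdtemp())
src = root / "frontal_cache" / "wikipedia"
src.mkdir(parents=True)
(src / "wikipedia.py").write_text("# dataset script")
copied = mirror_cache(root / "frontal_cache", root / "gpu_cache")
print(copied)  # 1
```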
https://github.com/huggingface/datasets/issues/836 | load_dataset with 'csv' is not working. while the same file is loading with 'text' mode or with pandas | Which version of pyarrow do you have ? Could you try to update pyarrow and try again ? | Hi All
I am trying to load a custom dataset and I am trying to load a single file to make sure the file is loading correctly:
dataset = load_dataset('csv', data_files=files)
When I run it I get:
Downloading and preparing dataset csv/default-35575a1051604c88 (download: Unknown size, generated: Unknown size, post-... | 18 | load_dataset with 'csv' is not working. while the same file is loading with 'text' mode or with pandas
Hi All
I am trying to load a custom dataset and I am trying to load a single file to make sure the file is loading correctly:
dataset = load_dataset('csv', data_files=files)
When I run it I get:
Downloading a... | [
-0.2411834002,
-0.319865644,
-0.0681557357,
0.4316073358,
0.3625251949,
0.019154856,
0.5210206509,
0.4377133548,
0.2949125767,
0.0327751897,
0.0169625934,
-0.1187047064,
0.0905916989,
0.1802895069,
0.0350660197,
0.113009423,
0.0730285496,
0.3988213837,
0.0317205191,
0.069972500... |
https://github.com/huggingface/datasets/issues/836 | load_dataset with 'csv' is not working. while the same file is loading with 'text' mode or with pandas | Thanks for the fast response. I have the latest version '2.0.0' (I tried to update)
I am working with Python 3.8.5 | Hi All
I am trying to load a custom dataset and I am trying to load a single file to make sure the file is loading correctly:
dataset = load_dataset('csv', data_files=files)
When I run it I get:
Downloading and preparing dataset csv/default-35575a1051604c88 (download: Unknown size, generated: Unknown size, post-... | 21 | load_dataset with 'csv' is not working. while the same file is loading with 'text' mode or with pandas
Hi All
I am trying to load a custom dataset and I am trying to load a single file to make sure the file is loading correctly:
dataset = load_dataset('csv', data_files=files)
When I run it I get:
Downloading a... | [
-0.2411834002,
-0.319865644,
-0.0681557357,
0.4316073358,
0.3625251949,
0.019154856,
0.5210206509,
0.4377133548,
0.2949125767,
0.0327751897,
0.0169625934,
-0.1187047064,
0.0905916989,
0.1802895069,
0.0350660197,
0.113009423,
0.0730285496,
0.3988213837,
0.0317205191,
0.069972500... |
https://github.com/huggingface/datasets/issues/836 | load_dataset with 'csv' is not working. while the same file is loading with 'text' mode or with pandas | I think that the issue is similar to this one:https://issues.apache.org/jira/browse/ARROW-9612
The problem is in arrow when the column data contains long strings.
Any ideas on how to bypass this? | Hi All
I am trying to load a custom dataset and I am trying to load a single file to make sure the file is loading correctly:
dataset = load_dataset('csv', data_files=files)
When I run it I get:
Downloading and preparing dataset csv/default-35575a1051604c88 (download: Unknown size, generated: Unknown size, post-... | 29 | load_dataset with 'csv' is not working. while the same file is loading with 'text' mode or with pandas
Hi All
I am trying to load a custom dataset and I am trying to load a single file to make sure the file is loading correctly:
dataset = load_dataset('csv', data_files=files)
When I run it I get:
Downloading a... | [
-0.2411834002,
-0.319865644,
-0.0681557357,
0.4316073358,
0.3625251949,
0.019154856,
0.5210206509,
0.4377133548,
0.2949125767,
0.0327751897,
0.0169625934,
-0.1187047064,
0.0905916989,
0.1802895069,
0.0350660197,
0.113009423,
0.0730285496,
0.3988213837,
0.0317205191,
0.069972500... |
https://github.com/huggingface/datasets/issues/836 | load_dataset with 'csv' is not working. while the same file is loading with 'text' mode or with pandas | We should expose the [`block_size` argument](https://arrow.apache.org/docs/python/generated/pyarrow.csv.ReadOptions.html#pyarrow.csv.ReadOptions) of Apache Arrow csv `ReadOptions` in the [script](https://github.com/huggingface/datasets/blob/master/datasets/csv/csv.py).
In the meantime you can specify yourself the ... | Hi All
I am trying to load a custom dataset and I am trying to load a single file to make sure the file is loading correctly:
dataset = load_dataset('csv', data_files=files)
When I run it I get:
Downloading and preparing dataset csv/default-35575a1051604c88 (download: Unknown size, generated: Unknown size, post-... | 56 | load_dataset with 'csv' is not working. while the same file is loading with 'text' mode or with pandas
Hi All
I am trying to load a custom dataset and I am trying to load a single file to make sure the file is loading correctly:
dataset = load_dataset('csv', data_files=files)
When I run it I get:
Downloading a... | [
-0.2411834002,
-0.319865644,
-0.0681557357,
0.4316073358,
0.3625251949,
0.019154856,
0.5210206509,
0.4377133548,
0.2949125767,
0.0327751897,
0.0169625934,
-0.1187047064,
0.0905916989,
0.1802895069,
0.0350660197,
0.113009423,
0.0730285496,
0.3988213837,
0.0317205191,
0.069972500... |
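The underlying failure is a row whose field is longer than the reader's buffer. Python's standard `csv` module has an analogous per-field cap, and raising it is a one-liner — a small stand-alone illustration of the same problem class (Arrow's `block_size` option mentioned above plays the analogous role there):

```python
import csv
import io

# One row with a very long text field -- the pattern that trips
# fixed-buffer CSV readers.
long_text = "word " * 50_000          # 250,000 characters
buf = io.StringIO()
csv.writer(buf).writerows([["id", "text"], ["1", long_text]])
buf.seek(0)

csv.field_size_limit(10_000_000)      # lift the per-field cap before parsing
rows = list(csv.reader(buf))
print(len(rows[1][1]))                # 250000
```

Without the `field_size_limit` call, `csv.reader` raises "field larger than field limit (131072)" on this input.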
https://github.com/huggingface/datasets/issues/836 | load_dataset with 'csv' is not working. while the same file is loading with 'text' mode or with pandas | This did help to load the data. But the problem now is that I get:
ArrowInvalid: CSV parse error: Expected 5 columns, got 187
It seems that this changed the parsing, so I changed the table to tab-separated and tried to load it directly from pyarrow.
But I got a similar error; again, it loaded fine in pandas, so I am no... | Hi All
I am trying to load a custom dataset and I am trying to load a single file to make sure the file is loading correctly:
dataset = load_dataset('csv', data_files=files)
When I run it I get:
Downloading and preparing dataset csv/default-35575a1051604c88 (download: Unknown size, generated: Unknown size, post-... | 66 | load_dataset with 'csv' is not working. while the same file is loading with 'text' mode or with pandas
Hi All
I am trying to load a custom dataset and I am trying to load a single file to make sure the file is loading correctly:
dataset = load_dataset('csv', data_files=files)
When I run it I get:
Downloading a... | [
-0.2411834002,
-0.319865644,
-0.0681557357,
0.4316073358,
0.3625251949,
0.019154856,
0.5210206509,
0.4377133548,
0.2949125767,
0.0327751897,
0.0169625934,
-0.1187047064,
0.0905916989,
0.1802895069,
0.0350660197,
0.113009423,
0.0730285496,
0.3988213837,
0.0317205191,
0.069972500... |
https://github.com/huggingface/datasets/issues/836 | load_dataset with 'csv' is not working. while the same file is loading with 'text' mode or with pandas | Got almost the same error loading a ~5GB TSV file, first got the same error as OP, then tried giving it my own ReadOptions and also got the same CSV parse error. | Hi All
I am trying to load a custom dataset and I am trying to load a single file to make sure the file is loading correctly:
dataset = load_dataset('csv', data_files=files)
When I run it I get:
Downloading and preparing dataset csv/default-35575a1051604c88 (download: Unknown size, generated: Unknown size, post-... | 32 | load_dataset with 'csv' is not working. while the same file is loading with 'text' mode or with pandas
Hi All
I am trying to load a custom dataset and I am trying to load a single file to make sure the file is loading correctly:
dataset = load_dataset('csv', data_files=files)
When I run it I get:
Downloading a... | [
-0.2411834002,
-0.319865644,
-0.0681557357,
0.4316073358,
0.3625251949,
0.019154856,
0.5210206509,
0.4377133548,
0.2949125767,
0.0327751897,
0.0169625934,
-0.1187047064,
0.0905916989,
0.1802895069,
0.0350660197,
0.113009423,
0.0730285496,
0.3988213837,
0.0317205191,
0.069972500... |
https://github.com/huggingface/datasets/issues/836 | load_dataset with 'csv' is not working. while the same file is loading with 'text' mode or with pandas | > We should expose the [`block_size` argument](https://arrow.apache.org/docs/python/generated/pyarrow.csv.ReadOptions.html#pyarrow.csv.ReadOptions) of Apache Arrow csv `ReadOptions` in the [script](https://github.com/huggingface/datasets/blob/master/datasets/csv/csv.py).
>
> In the meantime you can specify yourself ... | Hi All
I am trying to load a custom dataset and I am trying to load a single file to make sure the file is loading correctly:
dataset = load_dataset('csv', data_files=files)
When I run it I get:
Downloading and preparing dataset csv/default-35575a1051604c88 (download: Unknown size, generated: Unknown size, post-... | 82 | load_dataset with 'csv' is not working. while the same file is loading with 'text' mode or with pandas
Hi All
I am trying to load a custom dataset and I am trying to load a single file to make sure the file is loading correctly:
dataset = load_dataset('csv', data_files=files)
When I run it I get:
Downloading a... | [
-0.2411834002,
-0.319865644,
-0.0681557357,
0.4316073358,
0.3625251949,
0.019154856,
0.5210206509,
0.4377133548,
0.2949125767,
0.0327751897,
0.0169625934,
-0.1187047064,
0.0905916989,
0.1802895069,
0.0350660197,
0.113009423,
0.0730285496,
0.3988213837,
0.0317205191,
0.069972500... |
https://github.com/huggingface/datasets/issues/836 | load_dataset with 'csv' is not working. while the same file is loading with 'text' mode or with pandas | Hi ! Yes because of issues with PyArrow's CSV reader we switched to using the Pandas CSV reader. In particular the `read_options` argument is not supported anymore, but you can pass any parameter of Pandas' `read_csv` function (see the list here in [Pandas documentation](https://pandas.pydata.org/docs/reference/api/pan... | Hi All
I am trying to load a custom dataset and I am trying to load a single file to make sure the file is loading correctly:
dataset = load_dataset('csv', data_files=files)
When I run it I get:
Downloading and preparing dataset csv/default-35575a1051604c88 (download: Unknown size, generated: Unknown size, post-... | 44 | load_dataset with 'csv' is not working. while the same file is loading with 'text' mode or with pandas
Hi All
I am trying to load a custom dataset and I am trying to load a single file to make sure the file is loading correctly:
dataset = load_dataset('csv', data_files=files)
When I run it I get:
Downloading a... | [
-0.2411834002,
-0.319865644,
-0.0681557357,
0.4316073358,
0.3625251949,
0.019154856,
0.5210206509,
0.4377133548,
0.2949125767,
0.0327751897,
0.0169625934,
-0.1187047064,
0.0905916989,
0.1802895069,
0.0350660197,
0.113009423,
0.0730285496,
0.3988213837,
0.0317205191,
0.069972500... |
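Since the csv loader now forwards its keyword arguments to `pandas.read_csv`, parser options pass straight through. The snippet below exercises just the pandas side (assuming pandas is installed); in `load_dataset("csv", ...)` the same keywords would ride along unchanged:

```python
import io
import pandas as pd

# Tab-separated data with a stray double quote in a field -- the kind of
# line that breaks default quote handling.
raw = 'id\ttext\n1\tHe said "hi there\n2\tplain line\n'
df = pd.read_csv(io.StringIO(raw), sep="\t", quoting=3)  # 3 == csv.QUOTE_NONE
print(df.shape)  # (2, 2)
```

With `quoting=3` the quote character is kept as a literal character instead of starting a quoted field, so the odd line parses as two columns.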
https://github.com/huggingface/datasets/issues/835 | Wikipedia postprocessing | Hi @bminixhofer ! Parsing WikiMedia is notoriously difficult: this processing used [mwparserfromhell](https://github.com/earwig/mwparserfromhell) which is pretty good but not perfect.
As an alternative, you can also use the Wiki40b dataset which was pre-processed using an un-released Google internal tool | Hi, thanks for this library!
Running this code:
```py
import datasets
wikipedia = datasets.load_dataset("wikipedia", "20200501.de")
print(wikipedia['train']['text'][0])
```
I get:
```
mini|Ricardo Flores Magón
mini|Mexikanische Revolutionäre, Magón in der Mitte anführend, gegen die Diktatur von Porfir... | 38 | Wikipedia postprocessing
Hi, thanks for this library!
Running this code:
```py
import datasets
wikipedia = datasets.load_dataset("wikipedia", "20200501.de")
print(wikipedia['train']['text'][0])
```
I get:
```
mini|Ricardo Flores Magón
mini|Mexikanische Revolutionäre, Magón in der Mitte anführend, ge... | [
-0.0376304314,
0.116000846,
-0.1909410059,
0.4500379562,
0.2954647541,
-0.2197374851,
0.3712628484,
0.2625180185,
-0.0028850581,
0.0670773014,
0.1637322903,
0.3684880733,
0.0403136909,
-0.0634338334,
-0.0525209792,
-0.2800580263,
0.0010120387,
0.1088695079,
-0.2758776844,
-0.21... |
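The `mini|...` lines in the output above are left-over image captions (`mini` is German wikitext for `thumb`) that the parsing step did not remove. A heuristic post-processing pass can drop them — a sketch only, not a full wikitext parser, and the prefix list is an assumption covering the common German/English thumbnail keywords:

```python
import re

# Lines that are pure image/thumbnail caption residue, e.g.
# "mini|Ricardo Flores Magón".
CAPTION_RESIDUE = re.compile(r"^(?:mini|miniatur|thumb)\|.*$", re.MULTILINE)

def strip_caption_residue(text: str) -> str:
    cleaned = CAPTION_RESIDUE.sub("", text)
    # collapse the blank lines the removal leaves behind
    return re.sub(r"\n{3,}", "\n\n", cleaned).strip()

sample = (
    "mini|Ricardo Flores Magón\n"
    "mini|Mexikanische Revolutionäre\n"
    "Ricardo Flores Magón war ein mexikanischer Anarchist."
)
print(strip_caption_residue(sample))
```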
https://github.com/huggingface/datasets/issues/834 | [GEM] add WikiLingua cross-lingual abstractive summarization dataset | Hey @yjernite. This is a very interesting dataset. Would love to work on adding it but I see that the link to the data is to a gdrive folder. Can I just confirm wether dlmanager can handle gdrive urls or would this have to be a manual dl? | ## Adding a Dataset
- **Name:** WikiLingua
- **Description:** The dataset includes ~770k article and summary pairs in 18 languages from WikiHow. The gold-standard article-summary alignments across languages were extracted by aligning the images that are used to describe each how-to step in an article.
- **Paper:** h... | 48 | [GEM] add WikiLingua cross-lingual abstractive summarization dataset
## Adding a Dataset
- **Name:** WikiLingua
- **Description:** The dataset includes ~770k article and summary pairs in 18 languages from WikiHow. The gold-standard article-summary alignments across languages were extracted by aligning the images th... | [
-0.2052895129,
0.319988668,
0.0722147971,
0.5395572782,
0.1300372481,
0.219380632,
0.1278001666,
-0.1864243746,
-0.0289536808,
-0.0699261501,
0.2743755281,
0.0008882004,
-0.4298545718,
0.2057962716,
0.30813393,
-0.2922176123,
0.0054488541,
-0.2033133954,
0.0205825884,
-0.182049... |
https://github.com/huggingface/datasets/issues/834 | [GEM] add WikiLingua cross-lingual abstractive summarization dataset | Hi @KMFODA ! A version of WikiLingua is actually already accessible in the [GEM dataset](https://huggingface.co/datasets/gem)
You can use it for example to load the French to English translation with:
```python
from datasets import load_dataset
wikilingua = load_dataset("gem", "wiki_lingua_french_fr")
```
Clo... | ## Adding a Dataset
- **Name:** WikiLingua
- **Description:** The dataset includes ~770k article and summary pairs in 18 languages from WikiHow. The gold-standard article-summary alignments across languages were extracted by aligning the images that are used to describe each how-to step in an article.
- **Paper:** h... | 42 | [GEM] add WikiLingua cross-lingual abstractive summarization dataset
## Adding a Dataset
- **Name:** WikiLingua
- **Description:** The dataset includes ~770k article and summary pairs in 18 languages from WikiHow. The gold-standard article-summary alignments across languages were extracted by aligning the images th... | [
-0.2764140666,
0.0026881287,
-0.1001023874,
0.3107096553,
-0.1134150922,
0.1402345449,
0.010671787,
0.2736698985,
-0.0497234054,
0.1425441206,
-0.066612497,
0.2388452888,
0.0332167447,
0.3290539086,
0.1007776186,
-0.5720510483,
0.0124260718,
0.0506395921,
-0.1894690394,
-0.0938... |
https://github.com/huggingface/datasets/issues/827 | [GEM] MultiWOZ dialogue dataset | Hi @yjernite can I help in adding this dataset?
I am excited about this because this will be my first contribution to the datasets library as well as to hugginface. | ## Adding a Dataset
- **Name:** MultiWOZ (Multi-Domain Wizard-of-Oz)
- **Description:** 10k annotated human-human dialogues. Each dialogue consists of a goal, multiple user and system utterances as well as a belief state. Only system utterances are annotated with dialogue acts – there are no annotations from the user... | 30 | [GEM] MultiWOZ dialogue dataset
## Adding a Dataset
- **Name:** MultiWOZ (Multi-Domain Wizard-of-Oz)
- **Description:** 10k annotated human-human dialogues. Each dialogue consists of a goal, multiple user and system utterances as well as a belief state. Only system utterances are annotated with dialogue acts – ther... | [
-0.0495597422,
-0.1261753589,
0.0484425649,
0.5216320753,
0.0280472375,
0.2442049384,
0.225789696,
-0.1295135468,
-0.0764305741,
-0.0352302305,
-0.3937832713,
0.0283757411,
-0.4157283604,
0.3712356985,
0.275316149,
-0.4878650308,
-0.0401313715,
-0.1847280264,
-0.0573809482,
-0.... |
https://github.com/huggingface/datasets/issues/824 | Discussion using datasets in offline mode | I think it would be very cool. I'm currently working on a cluster from Compute Canada, and I have internet access only when I'm not in the nodes where I run the scripts. So I was expecting to be able to use the wmt14 dataset until I realized I needed internet connection even if I downloaded the data already. I'm going ... | `datasets.load_dataset("csv", ...)` breaks if you have no connection (There is already this issue https://github.com/huggingface/datasets/issues/761 about it). It seems to be the same for metrics too.
I create this ticket to discuss a bit and gather what you have in mind or other propositions.
Here are some point... | 72 | Discussion using datasets in offline mode
`datasets.load_dataset("csv", ...)` breaks if you have no connection (There is already this issue https://github.com/huggingface/datasets/issues/761 about it). It seems to be the same for metrics too.
I create this ticket to discuss a bit and gather what you have in mind o... | [
-0.6300708055,
0.2058907598,
-0.0102496855,
0.1388328224,
0.1934467554,
-0.1737716198,
0.5314205289,
0.083163783,
0.3500736356,
0.236711964,
0.1306475848,
-0.0646166056,
-0.1149485931,
0.4892583489,
-0.061324019,
-0.0358767994,
-0.1664478332,
-0.0370641537,
-0.2075222731,
0.166... |
https://github.com/huggingface/datasets/issues/824 | Discussion using datasets in offline mode | Requiring an online connection is a deal breaker in some cases, unfortunately, so it'd be great if an offline mode were added, similar to how `transformers` loads models offline just fine.
@mandubian's second bullet point suggests that there's a workaround allowing you to use your offline (custom?) dataset with `datasets`. Could yo... | `datasets.load_dataset("csv", ...)` breaks if you have no connection (There is already this issue https://github.com/huggingface/datasets/issues/761 about it). It seems to be the same for metrics too.
I create this ticket to discuss a bit and gather what you have in mind or other propositions.
Here are some point... | 57 | Discussion using datasets in offline mode
`datasets.load_dataset("csv", ...)` breaks if you have no connection (There is already this issue https://github.com/huggingface/datasets/issues/761 about it). It seems to be the same for metrics too.
I create this ticket to discuss a bit and gather what you have in mind o... | [
-0.4731808305,
0.2457837164,
-0.0126307793,
0.1412143409,
0.2833553851,
-0.1470210701,
0.6012274623,
0.0127758998,
0.2687810063,
0.1312779933,
-0.02403678,
-0.0239853263,
-0.0246622693,
0.4021068513,
-0.0912402794,
-0.1021380424,
-0.172233507,
0.0388773903,
-0.1540049464,
0.132... |
https://github.com/huggingface/datasets/issues/824 | Discussion using datasets in offline mode | here is my way to load a dataset offline, but it **requires** an online machine
1. (online machine)
```
import datasets
data = datasets.load_dataset(...)
data.save_to_disk('/YOUR/DATASET/DIR')
```
2. copy the dir from online to the offline machine
3. (offline machine)
```
import datasets
data = datasets.load_f... | `datasets.load_dataset("csv", ...)` breaks if you have no connection (There is already this issue https://github.com/huggingface/datasets/issues/761 about it). It seems to be the same for metrics too.
I create this ticket to discuss a bit and gather what you have in mind or other propositions.
Here are some point... | 47 | Discussion using datasets in offline mode
`datasets.load_dataset("csv", ...)` breaks if you have no connection (There is already this issue https://github.com/huggingface/datasets/issues/761 about it). It seems to be the same for metrics too.
I create this ticket to discuss a bit and gather what you have in mind o... | [
-0.4902576208,
0.228896305,
-0.0332209729,
0.1388731301,
0.2363724262,
-0.0891152248,
0.5481942892,
0.0673726052,
0.2955959141,
0.2450511009,
-0.0101742186,
-0.0694927797,
0.0276257005,
0.3827281892,
-0.1057182997,
-0.0218465738,
-0.1530393362,
0.0536615513,
-0.1772286445,
0.08... |
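The three-step recipe above (save on the online machine, copy the directory, reload offline) can be sketched end to end with the standard library. Here `json` files stand in for the `save_to_disk` / `load_from_disk` pair so the round trip stays self-contained; the paths are placeholders:

```python
import json
import pathlib
import shutil
import tempfile

# Step 1 (online machine): materialise the dataset to a directory.
root = pathlib.Path(tempfile.mkdtemp())
online_dir = root / "online" / "my_dataset"
online_dir.mkdir(parents=True)
(online_dir / "data.json").write_text(json.dumps({"text": ["a", "b"]}))

# Step 2: copy the directory to the offline machine (scp/rsync in practice).
offline_dir = root / "offline" / "my_dataset"
shutil.copytree(online_dir, offline_dir)

# Step 3 (offline machine): reload without any network access.
data = json.loads((offline_dir / "data.json").read_text())
print(data["text"])  # ['a', 'b']
```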
https://github.com/huggingface/datasets/issues/824 | Discussion using datasets in offline mode | > here is my way to load a dataset offline, but it **requires** an online machine
>
> 1. (online machine)
>
> ```
>
> import datasets
>
> data = datasets.load_dataset(...)
>
> data.save_to_disk('/YOUR/DATASET/DIR')
>
> ```
>
> 2. copy the dir from online to the offline machine
>
> 3. (offline machine)
>
> ```
> ... | `datasets.load_dataset("csv", ...)` breaks if you have no connection (There is already this issue https://github.com/huggingface/datasets/issues/761 about it). It seems to be the same for metrics too.
I create this ticket to discuss a bit and gather what you have in mind or other propositions.
Here are some point... | 76 | Discussion using datasets in offline mode
`datasets.load_dataset("csv", ...)` breaks if you have no connection (There is already this issue https://github.com/huggingface/datasets/issues/761 about it). It seems to be the same for metrics too.
I create this ticket to discuss a bit and gather what you have in mind o... | [
-0.499260217,
0.2269978076,
-0.0324691869,
0.1418720782,
0.2369506806,
-0.1029173434,
0.5442941189,
0.0744111612,
0.2753628194,
0.2442881614,
-0.0088338153,
-0.0665395111,
0.0280857626,
0.3756226599,
-0.0989000201,
-0.041955404,
-0.1623332649,
0.0563547723,
-0.1830360591,
0.107... |
https://github.com/huggingface/datasets/issues/824 | Discussion using datasets in offline mode | I opened a PR that allows to reload modules that have already been loaded once even if there's no internet.
Let me know if you know other ways that can make the offline mode experience better. I'd be happy to add them :)
I already note the "freeze" modules option, to prevent local modules updates. It would be a ... | `datasets.load_dataset("csv", ...)` breaks if you have no connection (There is already this issue https://github.com/huggingface/datasets/issues/761 about it). It seems to be the same for metrics too.
I create this ticket to discuss a bit and gather what you have in mind or other propositions.
Here are some point... | 179 | Discussion using datasets in offline mode
`datasets.load_dataset("csv", ...)` breaks if you have no connection (There is already this issue https://github.com/huggingface/datasets/issues/761 about it). It seems to be the same for metrics too.
I create this ticket to discuss a bit and gather what you have in mind o... | [
-0.4716481566,
0.2902269959,
-0.047672078,
0.1334421188,
0.2106888741,
-0.212225154,
0.5858257413,
0.0534167141,
0.283341229,
0.1841128469,
0.0310595687,
0.0326385386,
0.0011812204,
0.3333256841,
-0.072144039,
-0.058541812,
-0.1905425787,
-0.0051049632,
-0.1658581048,
0.0827915... |
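The reload-already-fetched-modules behaviour described above amounts to a cache-first lookup guarded by an offline flag. A toy emulation of that contract — `DEMO_OFFLINE` and the URL are made-up stand-ins, and the "download" is simulated:

```python
import os

CACHE: dict = {}

def fetch_script(url: str) -> str:
    # With the (made-up) DEMO_OFFLINE flag set, never touch the network
    # and serve whatever a previous online run already cached.
    if os.environ.get("DEMO_OFFLINE") == "1":
        if url in CACHE:
            return CACHE[url]
        raise ConnectionError(f"offline and {url!r} was never cached")
    CACHE[url] = f"module-source-for:{url}"  # stand-in for a real download
    return CACHE[url]

url = "https://example.com/squad.py"
fetch_script(url)                    # online run populates the cache
os.environ["DEMO_OFFLINE"] = "1"
print(fetch_script(url))             # served from cache, no network
```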
https://github.com/huggingface/datasets/issues/824 | Discussion using datasets in offline mode | The local dataset builders (csv, text , json and pandas) are now part of the `datasets` package since #1726 :)
You can now use them offline
```python
datasets = load_dataset('text', data_files=data_files)
```
We'll do a new release soon | `datasets.load_dataset("csv", ...)` breaks if you have no connection (There is already this issue https://github.com/huggingface/datasets/issues/761 about it). It seems to be the same for metrics too.
I create this ticket to discuss a bit and gather what you have in mind or other propositions.
Here are some point... | 38 | Discussion using datasets in offline mode
`datasets.load_dataset("csv", ...)` breaks if you have no connection (There is already this issue https://github.com/huggingface/datasets/issues/761 about it). It seems to be the same for metrics too.
I create this ticket to discuss a bit and gather what you have in mind o... | [
-0.4490852952,
0.2095066905,
-0.0598192662,
0.1293503046,
0.2621928155,
-0.1312853992,
0.5469647646,
0.0949263871,
0.3154381216,
0.2294312418,
0.0510409586,
-0.0031590078,
0.0479417928,
0.4036760032,
-0.1032219231,
-0.1191782206,
-0.1096812934,
0.080623731,
-0.1709269285,
0.092... |
https://github.com/huggingface/datasets/issues/823 | how processing in batch works in datasets | Hi I don’t think this is a request for a dataset like you labeled it.
I also think this would be better suited for the forum at https://discuss.huggingface.co. We try to keep the issues on the repo for bug reports and new feature/dataset requests, and have usage questions discussed on the forum. Thanks. | Hi,
I need to process my datasets before they are passed to the dataloader in batches,
here is my code
```
class AbstractTask(ABC):
task_name: str = NotImplemented
preprocessor: Callable = NotImplemented
split_to_data_split: Mapping[str, str] = NotImplemented
tokenizer: Callable = NotImplemented
... | 53 | how processing in batch works in datasets
Hi,
I need to process my datasets before it is passed to dataloader in batch,
here is my codes
```
class AbstractTask(ABC):
task_name: str = NotImplemented
preprocessor: Callable = NotImplemented
split_to_data_split: Mapping[str, str] = NotImplemented
... | [
-0.5023792982,
-0.1743950695,
-0.2295022011,
0.1301573068,
0.1780351549,
0.031794928,
0.2955615819,
0.171304509,
-0.1484706998,
0.1780764759,
0.0091214012,
0.1247810051,
0.0641202405,
0.245623365,
-0.056165114,
-0.0358915068,
-0.0745974332,
0.0483772829,
0.1096094027,
0.0072344... |
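To make the `batched=True` question above concrete: the data is not divided into batches beforehand; `map` slices it on the fly and hands each slice to the function as a dict of column name to list of values. The `map_batched` helper below is a hypothetical toy illustration, not the real `Dataset.map` internals:

```python
def map_batched(examples, function, batch_size=2):
    """Toy illustration of Dataset.map(..., batched=True): slice the data
    into batches on the fly and pass each batch as {column: list of values}."""
    columns = examples[0].keys()
    mapped = []
    for start in range(0, len(examples), batch_size):
        chunk = examples[start:start + batch_size]
        batch = {col: [ex[col] for ex in chunk] for col in columns}
        result = function(batch)  # the function sees lists, one per column
        n = len(next(iter(result.values())))
        mapped.extend({col: result[col][i] for col in result} for i in range(n))
    return mapped

data = [{"text": "a"}, {"text": "bb"}, {"text": "ccc"}]
mapped = map_batched(data, lambda batch: {"length": [len(t) for t in batch["text"]]})
print(mapped)  # [{'length': 1}, {'length': 2}, {'length': 3}]
```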
https://github.com/huggingface/datasets/issues/823 | how processing in batch works in datasets | Hi Thomas,
what I do not get from documentation is that why when you set batched=True,
this is processed in batch, while data is not divided to batched
beforehand, basically this is a question on the documentation and I do not
get the batched=True, but sure, if you think this is more appropriate in
forum I will post it... | Hi,
I need to process my datasets before it is passed to dataloader in batch,
here is my codes
```
class AbstractTask(ABC):
task_name: str = NotImplemented
preprocessor: Callable = NotImplemented
split_to_data_split: Mapping[str, str] = NotImplemented
tokenizer: Callable = NotImplemented
... | 167 | how processing in batch works in datasets
Hi,
I need to process my datasets before it is passed to dataloader in batch,
here is my codes
```
class AbstractTask(ABC):
task_name: str = NotImplemented
preprocessor: Callable = NotImplemented
split_to_data_split: Mapping[str, str] = NotImplemented
... | [
-0.5023792982,
-0.1743950695,
-0.2295022011,
0.1301573068,
0.1780351549,
0.031794928,
0.2955615819,
0.171304509,
-0.1484706998,
0.1780764759,
0.0091214012,
0.1247810051,
0.0641202405,
0.245623365,
-0.056165114,
-0.0358915068,
-0.0745974332,
0.0483772829,
0.1096094027,
0.0072344... |
https://github.com/huggingface/datasets/issues/823 | how processing in batch works in datasets | Yes the forum is perfect for that. You can post in the `datasets` section.
Thanks a lot! | Hi,
I need to process my datasets before it is passed to dataloader in batch,
here is my codes
```
class AbstractTask(ABC):
task_name: str = NotImplemented
preprocessor: Callable = NotImplemented
split_to_data_split: Mapping[str, str] = NotImplemented
tokenizer: Callable = NotImplemented
... | 17 | how processing in batch works in datasets
Hi,
I need to process my datasets before it is passed to dataloader in batch,
here is my codes
```
class AbstractTask(ABC):
task_name: str = NotImplemented
preprocessor: Callable = NotImplemented
split_to_data_split: Mapping[str, str] = NotImplemented
... | [
-0.5023792982,
-0.1743950695,
-0.2295022011,
0.1301573068,
0.1780351549,
0.031794928,
0.2955615819,
0.171304509,
-0.1484706998,
0.1780764759,
0.0091214012,
0.1247810051,
0.0641202405,
0.245623365,
-0.056165114,
-0.0358915068,
-0.0745974332,
0.0483772829,
0.1096094027,
0.0072344... |
https://github.com/huggingface/datasets/issues/822 | datasets freezes | Pytorch is unable to convert strings to tensors unfortunately.
You can use `set_format(type="torch")` on columns that can be converted to tensors, such as token ids.
This makes me think that we should probably raise an error or at least a warning when one tries to create pytorch tensors out of text columns | Hi, I want to load these two datasets and convert them to Dataset format in torch and the code freezes for me, could you have a look please? thanks
dataset1 = load_dataset("squad", split="train[:10]")
dataset1 = dataset1.set_format(type='torch', columns=['context', 'answers', 'question'])
dataset2 = load_datase... | 52 | datasets freezes
Hi, I want to load these two datasets and convert them to Dataset format in torch and the code freezes for me, could you have a look please? thanks
dataset1 = load_dataset("squad", split="train[:10]")
dataset1 = dataset1.set_format(type='torch', columns=['context', 'answers', 'question'])
da... | [
-0.1619094461,
-0.3347921073,
-0.0465798452,
0.5367453694,
0.3581035733,
0.2125210911,
0.5072590113,
0.3903237879,
-0.0151282242,
0.120722048,
-0.21651797,
0.2929007411,
-0.1185356975,
-0.1630847901,
-0.1131298915,
-0.5349529386,
0.115431197,
-0.0951829329,
-0.3404096663,
0.099... |
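The freeze report above boils down to `set_format(type='torch')` being applied to string columns, which PyTorch cannot turn into tensors. Below is a torch-free sketch of the underlying check; the `tensorable_columns` helper is hypothetical, for illustration only:

```python
def tensorable_columns(example):
    """Rough illustration of why set_format(type='torch') fails on text:
    only columns whose values are numbers (or nested lists of numbers)
    can become tensors; strings cannot."""
    def is_numeric(value):
        if isinstance(value, bool):
            return False
        if isinstance(value, (int, float)):
            return True
        if isinstance(value, list):
            return all(is_numeric(v) for v in value)
        return False  # str, dict, None, ...
    return [col for col, value in example.items() if is_numeric(value)]

example = {"context": "some passage", "input_ids": [101, 2023, 102], "score": 0.5}
print(tensorable_columns(example))  # ['input_ids', 'score']
```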
https://github.com/huggingface/datasets/issues/816 | [Caching] Dill globalvars() output order is not deterministic and can cause cache issues. | To show the issue:
```
python -c "from datasets.fingerprint import Hasher; a=[]; func = lambda : len(a); print(Hasher.hash(func))"
```
doesn't always return the same output since `globs` is a dictionary with "a" and "len" as keys but sometimes not in the same order
To fix that one could register an implementati... | 43 | [Caching] Dill globalvars() output order is not deterministic and can cause cache issues.
Dill uses `dill.detect.globalvars` to get the globals used by a function in a recursive dump. `globalvars` returns a dictionary of all the globals that a dumped function needs. However the order of the keys in this dict is not d... | [
-0.0819264129,
-0.0495191179,
-0.0879311562,
0.1216145232,
0.0892917588,
-0.1563625038,
0.1671693176,
0.243326202,
-0.0996845365,
0.0803644955,
-0.0827870071,
0.2674162686,
-0.1859902591,
-0.268915683,
0.0077192858,
0.3598431051,
-0.0797576532,
0.0291420743,
-0.437990427,
-0.27... |
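The fix proposed above — make the serialization of the globals deterministic by sorting the keys — can be sketched with the stdlib. `hash_globals` is a hypothetical illustration, not the actual `datasets.fingerprint.Hasher` implementation:

```python
import hashlib
import json

def hash_globals(globs):
    """Sketch of the proposed fix: serialize the globals dict with sorted
    keys so the fingerprint no longer depends on dict ordering."""
    canonical = json.dumps(globs, sort_keys=True, default=repr)
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()

# Two orderings of the same globals hash identically once keys are sorted.
h1 = hash_globals({"a": [], "len": len})
h2 = hash_globals({"len": len, "a": []})
print(h1 == h2)  # True
```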
https://github.com/huggingface/datasets/issues/815 | Is dataset iterative or not? | Hello !
Could you give more details ?
If you mean iter through one dataset then yes, `Dataset` object does implement the `__iter__` method so you can use
```python
for example in dataset:
# do something
```
If you want to iter through several datasets you can first concatenate them
```python
from data... | Hi
I want to use your library for large-scale training, I am not sure if this is implemented as iterative datasets or not?
could you provide me with example how I can use datasets as iterative datasets?
thanks | 67 | Is dataset iterative or not?
Hi
I want to use your library for large-scale training, I am not sure if this is implemented as iterative datasets or not?
could you provide me with example how I can use datasets as iterative datasets?
thanks
Hello !
Could you give more details ?
If you mean iter through one dat... | [
-0.2620127797,
-0.2916283607,
-0.1804152727,
0.1139709949,
0.0606510378,
-0.0265372507,
0.2819752693,
0.168773964,
0.1109018102,
0.0845515579,
0.1554596275,
0.2481859177,
-0.3554363251,
0.3489526808,
0.1711000055,
-0.0656194091,
0.0205350388,
0.136970669,
-0.6233145595,
-0.0821... |
https://github.com/huggingface/datasets/issues/815 | Is dataset iterative or not? | Hi Huggingface/Datasets team,
I want to use the datasets inside Seq2SeqDataset here
https://github.com/huggingface/transformers/blob/master/examples/seq2seq/utils.py
and there I need to return back each line from the datasets and I am not
sure how to access each line and implement this?
It seems it also has get_item at... | Hi
I want to use your library for large-scale training, I am not sure if this is implemented as iterative datasets or not?
could you provide me with example how I can use datasets as iterative datasets?
thanks | 185 | Is dataset iterative or not?
Hi
I want to use your library for large-scale training, I am not sure if this is implemented as iterative datasets or not?
could you provide me with example how I can use datasets as iterative datasets?
thanks
Hi Huggingface/Datasets team,
I want to use the datasets inside Seq2SeqDat... | [
-0.0225436985,
-0.3792023063,
-0.0717195347,
0.379314661,
0.0794591978,
-0.1549821794,
0.2763755023,
0.0828171968,
0.0711624995,
-0.1428880095,
-0.0643623471,
0.0111190565,
-0.2454966158,
0.3429044187,
0.1579364836,
-0.0702994391,
-0.1000305712,
0.007463037,
-0.7639589906,
-0.0... |
https://github.com/huggingface/datasets/issues/815 | Is dataset iterative or not? | Could you tell me please if datasets also has `__getitem__`? Any idea on how
to integrate it with Seq2SeqDataset is appreciated, thanks
On Mon, Nov 9, 2020 at 10:22 AM Rabeeh Karimi Mahabadi <rabeeh@google.com>
wrote:
> Hi Huggingface/Datasets team,
> I want to use the datasets inside Seq2SeqDataset here
> https://github... | Hi
I want to use your library for large-scale training, I am not sure if this is implemented as iterative datasets or not?
could you provide me with example how I can use datasets as iterative datasets?
thanks | 236 | Is dataset iterative or not?
Hi
I want to use your library for large-scale training, I am not sure if this is implemented as iterative datasets or not?
could you provide me with example how I can use datasets as iterative datasets?
thanks
could you tell me please if datasets also has __getitem__ any idea on how
... | [
-0.1425884068,
-0.3471776843,
-0.1183144748,
0.351855129,
0.0803463608,
-0.0877769589,
0.2236452103,
0.2374292165,
0.033236526,
-0.1268833727,
-0.038506709,
0.0500972159,
-0.267303735,
0.3630907536,
0.0759755,
-0.0330849811,
-0.0817345604,
-0.0110199088,
-0.7120335698,
-0.05418... |
https://github.com/huggingface/datasets/issues/815 | Is dataset iterative or not? | `datasets.Dataset` objects implement indeed `__getitem__`. It returns a dictionary with one field per column.
We've not added the integration of the datasets library for the seq2seq utilities yet. The current seq2seq utilities are based on text files.
However as soon as you have a `datasets.Dataset` with columns ... | Hi
I want to use your library for large-scale training, I am not sure if this is implemented as iterative datasets or not?
could you provide me with example how I can use datasets as iterative datasets?
thanks | 76 | Is dataset iterative or not?
Hi
I want to use your library for large-scale training, I am not sure if this is implemented as iterative datasets or not?
could you provide me with example how I can use datasets as iterative datasets?
thanks
`datasets.Dataset` objects implement indeed `__getitem__`. It returns a di... | [
-0.1060157493,
-0.0653585047,
-0.0744618848,
0.2728960514,
0.0212029554,
-0.0095436303,
0.2350495756,
0.2607482076,
0.0178501308,
-0.1811227351,
0.2007771134,
0.2117640376,
-0.3875654936,
0.1963475496,
0.0909051672,
0.0099151097,
-0.1221983805,
0.017718764,
-0.6584700942,
-0.13... |
https://github.com/huggingface/datasets/issues/815 | Is dataset iterative or not? | Hi
I am sorry for asking it multiple times but I am not getting the dataloader
type, could you confirm if the dataset library returns back an iterable
type dataloader or a mapping type one where one has access to __getitem__,
in the former case, one can iterate with __iter__, and how I can configure
it to return the da... | Hi
I want to use your library for large-scale training, I am not sure if this is implemented as iterative datasets or not?
could you provide me with example how I can use datasets as iterative datasets?
thanks | 217 | Is dataset iterative or not?
Hi
I want to use your library for large-scale training, I am not sure if this is implemented as iterative datasets or not?
could you provide me with example how I can use datasets as iterative datasets?
thanks
Hi
I am sorry for asking it multiple times but I am not getting the datalo... | [
-0.1699586809,
-0.1714066267,
-0.0689312965,
0.292388916,
0.1009494215,
-0.0845248848,
0.3338167071,
0.2466904223,
0.1797013581,
-0.1265342534,
0.0680098459,
0.1442487538,
-0.3788858056,
0.4128890038,
0.1700597107,
-0.0318090804,
-0.0919880718,
0.0115780635,
-0.6868848801,
-0.1... |
https://github.com/huggingface/datasets/issues/815 | Is dataset iterative or not? | `datasets.Dataset` objects are both iterative and mapping types: it has both `__iter__` and `__getitem__`
For example you can do
```python
for example in dataset:
# do something
```
or
```python
for i in range(len(dataset)):
example = dataset[i]
# do something
```
When you do that, one and only ... | Hi
I want to use your library for large-scale training, I am not sure if this is implemented as iterative datasets or not?
could you provide me with example how I can use datasets as iterative datasets?
thanks | 57 | Is dataset iterative or not?
Hi
I want to use your library for large-scale training, I am not sure if this is implemented as iterative datasets or not?
could you provide me with example how I can use datasets as iterative datasets?
thanks
`datasets.Dataset` objects are both iterative and mapping types: it has bo... | [
-0.1967026293,
-0.4120612442,
-0.1522585303,
0.1939140111,
0.0503937863,
0.0091605261,
0.2521877289,
0.0610390194,
0.1378219873,
-0.0013327032,
0.2277509421,
0.2708900869,
-0.4278081954,
0.1769475043,
0.2024409771,
-0.0889549628,
0.01298285,
0.0472291932,
-0.6187961102,
-0.0078... |
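The dual interface described above can be mirrored in a few lines. `ToyDataset` is a hypothetical stand-in that, like `datasets.Dataset`, supports both mapping-style indexing and iteration:

```python
class ToyDataset:
    """Sketch of the datasets.Dataset interface: a map-style dataset
    (__len__/__getitem__) that is therefore also iterable (__iter__)."""
    def __init__(self, rows):
        self.rows = rows
    def __len__(self):
        return len(self.rows)
    def __getitem__(self, i):
        return self.rows[i]  # a dict with one field per column
    def __iter__(self):
        for i in range(len(self)):
            yield self[i]

ds = ToyDataset([{"src": "hello", "tgt": "bonjour"}, {"src": "bye", "tgt": "salut"}])
print(ds[1]["tgt"])                    # mapping-style access -> salut
print([ex["src"] for ex in ds])        # iteration -> ['hello', 'bye']
```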
https://github.com/huggingface/datasets/issues/815 | Is dataset iterative or not? | Hi there,
Here is what I am trying; this is not working for me with map-style datasets. Could you please tell me how to use datasets while being able to access `__getitem__`? Could you assist me in correcting this example? I need a map-style dataset formed from the concatenation of two datasets from your library...
I want to use your library for large-scale training, I am not sure if this is implemented as iterative datasets or not?
could you provide me with example how I can use datasets as iterative datasets?
thanks | 113 | Is dataset iterative or not?
Hi
I want to use your library for large-scale training, I am not sure if this is implemented as iterative datasets or not?
could you provide me with example how I can use datasets as iterative datasets?
thanks
Hi there,
Here is what I am trying, this is not working for me in map-st... | [
-0.3016708493,
-0.391623646,
-0.1244997978,
0.4045021534,
0.1218657345,
0.0815263838,
0.234580487,
0.1949106604,
-0.0445439443,
-0.0258444548,
0.0440254696,
0.3346887231,
-0.3416886032,
0.0193575341,
-0.0354218781,
-0.1522007287,
-0.1350664943,
-0.1091264933,
-0.5712971687,
-0.... |
https://github.com/huggingface/datasets/issues/813 | How to implement DistributedSampler with datasets | Hi Apparently I need to shard the data and give one host a chunk, could you provide me please with examples on how to do it? I want to use it jointly with finetune_trainer.py in huggingface repo seq2seq examples. thanks. | Hi,
I am using your datasets to define my dataloaders, and I am training finetune_trainer.py in huggingface repo on them.
I need a distributedSampler to be able to train the models on TPUs being able to distribute the load across the TPU cores. Could you tell me how I can implement the distribued sampler when using d... | 40 | How to implement DistributedSampler with datasets
Hi,
I am using your datasets to define my dataloaders, and I am training finetune_trainer.py in huggingface repo on them.
I need a distributedSampler to be able to train the models on TPUs being able to distribute the load across the TPU cores. Could you tell me ho... | [
-0.176231131,
-0.2129825652,
0.041827891,
0.1458269805,
0.2960067689,
-0.2145046592,
0.2066294253,
-0.0596599169,
0.0736608207,
0.3631319702,
-0.1494353265,
0.2770125568,
-0.3263443708,
0.1506610811,
-0.0292026736,
-0.5932828188,
-0.0871203169,
0.2308089286,
-0.2494452745,
-0.0... |
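The per-host sharding asked about above amounts to strided slicing. The `shard` helper below is a hypothetical sketch of one common scheme (similar in spirit to `datasets.Dataset.shard`), where host `index` keeps every `num_shards`-th example:

```python
def shard(dataset, num_shards, index):
    """Hypothetical sketch of per-host sharding: host `index` keeps
    every num_shards-th example of the full dataset."""
    return dataset[index::num_shards]

data = list(range(10))
shards = [shard(data, 4, rank) for rank in range(4)]
print(shards)  # [[0, 4, 8], [1, 5, 9], [2, 6], [3, 7]]

# Every example lands in exactly one shard.
assert sorted(sum(shards, [])) == data
```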
https://github.com/huggingface/datasets/issues/812 | Too much logging | Hi ! Thanks for reporting :)
I agree these one should be hidden when the logging level is warning, we'll fix that | I'm doing this in the beginning of my script:
from datasets.utils import logging as datasets_logging
datasets_logging.set_verbosity_warning()
but I'm still getting these logs:
[2020-11-07 15:45:41,908][filelock][INFO] - Lock 139958278886176 acquired on /home/username/.cache/huggingface/datasets/cfe20ffaa80ef1... | 22 | Too much logging
I'm doing this in the beginning of my script:
from datasets.utils import logging as datasets_logging
datasets_logging.set_verbosity_warning()
but I'm still getting these logs:
[2020-11-07 15:45:41,908][filelock][INFO] - Lock 139958278886176 acquired on /home/username/.cache/huggingface/dat... | [
-0.0939555764,
-0.1382926702,
-0.1109822839,
0.1179055125,
0.3646458983,
0.1386170089,
0.3069938421,
0.5121555924,
0.1688757241,
-0.010042835,
0.0955122933,
0.0601682849,
-0.4101091027,
0.0572798997,
-0.2360522151,
0.2184815556,
0.0992702842,
-0.117720522,
-0.2310966551,
-0.138... |
https://github.com/huggingface/datasets/issues/812 | Too much logging | +1, the amount of logging is excessive.
Most of it indeed comes from `filelock.py`, though there are occasionally messages from other sources too. Below is an example (all of these messages were logged after I already called `datasets.logging.set_verbosity_error()`)
```
I1109 21:26:01.742688 139785006901056 file... | I'm doing this in the beginning of my script:
from datasets.utils import logging as datasets_logging
datasets_logging.set_verbosity_warning()
but I'm still getting these logs:
[2020-11-07 15:45:41,908][filelock][INFO] - Lock 139958278886176 acquired on /home/username/.cache/huggingface/datasets/cfe20ffaa80ef1... | 145 | Too much logging
I'm doing this in the beginning of my script:
from datasets.utils import logging as datasets_logging
datasets_logging.set_verbosity_warning()
but I'm still getting these logs:
[2020-11-07 15:45:41,908][filelock][INFO] - Lock 139958278886176 acquired on /home/username/.cache/huggingface/dat... | [
-0.0745944753,
-0.0315370411,
-0.0513727218,
0.1314179599,
0.284470588,
0.2179288417,
0.2618090212,
0.4838850796,
0.0517459847,
-0.1740336418,
0.1080990657,
0.0136713078,
-0.4307381809,
-0.0791789964,
-0.1534983814,
0.1896949559,
0.0866655633,
-0.2138257772,
-0.2665444911,
-0.0... |
https://github.com/huggingface/datasets/issues/812 | Too much logging | In the latest version of the lib the logs about locks are at the DEBUG level so you won't see them by default.
Also `set_verbosity_warning` does take into account these logs now.
Can you try to update the lib ?
```
pip install --upgrade datasets
``` | I'm doing this in the beginning of my script:
from datasets.utils import logging as datasets_logging
datasets_logging.set_verbosity_warning()
but I'm still getting these logs:
[2020-11-07 15:45:41,908][filelock][INFO] - Lock 139958278886176 acquired on /home/username/.cache/huggingface/datasets/cfe20ffaa80ef1... | 46 | Too much logging
I'm doing this in the beginning of my script:
from datasets.utils import logging as datasets_logging
datasets_logging.set_verbosity_warning()
but I'm still getting these logs:
[2020-11-07 15:45:41,908][filelock][INFO] - Lock 139958278886176 acquired on /home/username/.cache/huggingface/dat... | [
-0.3842003345,
-0.1210567653,
-0.0782301724,
0.1213070154,
0.3027054965,
0.0185210798,
0.195324719,
0.3992017806,
0.2092475295,
-0.0036373115,
0.0869948938,
0.2293004543,
-0.3797933757,
-0.0543083698,
-0.1946720779,
0.1247085109,
0.0595848411,
-0.0120905591,
-0.296502471,
-0.05... |
https://github.com/huggingface/datasets/issues/812 | Too much logging | Thanks. For some reason I have to use the older version. Is it possible to fix this with some surface-level trick?
I'm still using datasets version 1.13. | I'm doing this in the beginning of my script:
from datasets.utils import logging as datasets_logging
datasets_logging.set_verbosity_warning()
but I'm still getting these logs:
[2020-11-07 15:45:41,908][filelock][INFO] - Lock 139958278886176 acquired on /home/username/.cache/huggingface/datasets/cfe20ffaa80ef1... | 28 | Too much logging
I'm doing this in the beginning of my script:
from datasets.utils import logging as datasets_logging
datasets_logging.set_verbosity_warning()
but I'm still getting these logs:
[2020-11-07 15:45:41,908][filelock][INFO] - Lock 139958278886176 acquired on /home/username/.cache/huggingface/dat... | [
-0.1845475137,
-0.0496456586,
-0.0627771392,
0.1152711511,
0.3269052207,
0.2235253006,
0.2482611537,
0.528516233,
0.0655088052,
-0.1148338318,
0.0331851281,
0.073486127,
-0.3805564046,
0.0945331976,
-0.1415010691,
0.0496979393,
0.0995936543,
-0.1345143169,
-0.2368710935,
-0.004... |
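The mechanics behind the fix above are plain stdlib `logging`: the `filelock` messages disappear once that logger's level is raised above INFO, which is what the corrected `set_verbosity_warning` accomplishes. A self-contained sketch:

```python
import io
import logging

# Route a "filelock"-style logger to a buffer we can inspect.
buffer = io.StringIO()
handler = logging.StreamHandler(buffer)
logger = logging.getLogger("filelock")
logger.addHandler(handler)

logger.setLevel(logging.INFO)
logger.info("Lock 139958278886176 acquired")  # emitted at INFO level

logger.setLevel(logging.WARNING)              # what set_verbosity_warning implies
logger.info("Lock 139958278886176 released")  # now suppressed

print(buffer.getvalue().strip())  # only the first message survives
```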
https://github.com/huggingface/datasets/issues/809 | Add Google Taskmaster dataset | Hey @yjernite. Was going to start working on this but found taskmaster 1,2 & 3 in the datasets library already so think this can be closed now? | ## Adding a Dataset
- **Name:** Taskmaster
- **Description:** A large dataset of task-oriented dialogue with annotated goals (55K dialogues covering entertainment and travel reservations)
- **Paper:** https://arxiv.org/abs/1909.05358
- **Data:** https://github.com/google-research-datasets/Taskmaster
- **Motivation... | 27 | Add Google Taskmaster dataset
## Adding a Dataset
- **Name:** Taskmaster
- **Description:** A large dataset of task-oriented dialogue with annotated goals (55K dialogues covering entertainment and travel reservations)
- **Paper:** https://arxiv.org/abs/1909.05358
- **Data:** https://github.com/google-research-dat... | [
-0.2614887953,
0.0552466623,
-0.1595136225,
0.2156595439,
0.1674281508,
-0.0379507989,
0.3336216807,
0.0670079514,
0.1323193163,
0.1240415946,
-0.2129541785,
0.2662529945,
-0.4864400029,
0.5283169746,
0.1895262748,
-0.0497275293,
0.0831428543,
-0.0871643871,
0.1714654714,
0.166... |
https://github.com/huggingface/datasets/issues/807 | load_dataset for LOCAL CSV files report CONNECTION ERROR | Hi !
The url works on my side.
Is the url working in your navigator ?
Are you connected to internet ? Does your network block access to `raw.githubusercontent.com` ? | ## load_dataset for LOCAL CSV files report CONNECTION ERROR
- **Description:**
A local demo csv file:
```
import pandas as pd
import numpy as np
from datasets import load_dataset
import torch
import transformers
df = pd.DataFrame(np.arange(1200).reshape(300,4))
df.to_csv('test.csv', header=False, index=Fal... | 30 | load_dataset for LOCAL CSV files report CONNECTION ERROR
## load_dataset for LOCAL CSV files report CONNECTION ERROR
- **Description:**
A local demo csv file:
```
import pandas as pd
import numpy as np
from datasets import load_dataset
import torch
import transformers
df = pd.DataFrame(np.arange(1200).res... | [
-0.3247119784,
0.0161048751,
-0.1088048592,
0.0260045994,
0.2420430928,
0.0198688172,
0.7173693776,
0.3756362498,
0.2509218454,
0.1337040961,
-0.0342322253,
0.1497043818,
0.079799369,
0.0211836994,
-0.1269313097,
-0.1772821993,
-0.1271575987,
0.2743317187,
-0.4187990725,
0.0144... |
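The connection error above comes from older `datasets` versions fetching the csv processing script from GitHub even for local files. As a purely offline fallback, a local CSV can be read into a column dict with the stdlib; `load_local_csv` is a hypothetical helper, not part of the library:

```python
import csv
import io

def load_local_csv(file_obj):
    """Offline fallback sketch: read a local CSV into {column: [values]}
    without any network access (no script download from GitHub needed)."""
    reader = csv.DictReader(file_obj)
    columns = {name: [] for name in reader.fieldnames}
    for row in reader:
        for name, value in row.items():
            columns[name].append(value)
    return columns

demo = io.StringIO("text,label\nhello,0\nworld,1\n")
data = load_local_csv(demo)
print(data)  # {'text': ['hello', 'world'], 'label': ['0', '1']}
```

Note that the values come back as strings; type casting is left to the caller.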
https://github.com/huggingface/datasets/issues/807 | load_dataset for LOCAL CSV files report CONNECTION ERROR | > Hi !
> The url works on my side.
>
> Is the url working in your navigator ?
> Are you connected to internet ? Does your network block access to `raw.githubusercontent.com` ?
I tried another server, it's working now. Thanks a lot.
And I'm curious about why download things from "github" when I load dataset f... | ## load_dataset for LOCAL CSV files report CONNECTION ERROR
- **Description:**
A local demo csv file:
```
import pandas as pd
import numpy as np
from datasets import load_dataset
import torch
import transformers
df = pd.DataFrame(np.arange(1200).reshape(300,4))
df.to_csv('test.csv', header=False, index=Fal... | 69 | load_dataset for LOCAL CSV files report CONNECTION ERROR
## load_dataset for LOCAL CSV files report CONNECTION ERROR
- **Description:**
A local demo csv file:
```
import pandas as pd
import numpy as np
from datasets import load_dataset
import torch
import transformers
df = pd.DataFrame(np.arange(1200).res... | [
-0.3247119784,
0.0161048751,
-0.1088048592,
0.0260045994,
0.2420430928,
0.0198688172,
0.7173693776,
0.3756362498,
0.2509218454,
0.1337040961,
-0.0342322253,
0.1497043818,
0.079799369,
0.0211836994,
-0.1269313097,
-0.1772821993,
-0.1271575987,
0.2743317187,
-0.4187990725,
0.0144... |
https://github.com/huggingface/datasets/issues/807 | load_dataset for LOCAL CSV files report CONNECTION ERROR |
> > Hi !
> > The url works on my side.
> > Is the url working in your navigator ?
> > Are you connected to internet ? Does your network block access to `raw.githubusercontent.com` ?
>
> I tried another server, it's working now. Thanks a lot.
>
> And I'm curious about why download things from "github" whe... | ## load_dataset for LOCAL CSV files report CONNECTION ERROR
- **Description:**
A local demo csv file:
```
import pandas as pd
import numpy as np
from datasets import load_dataset
import torch
import transformers
df = pd.DataFrame(np.arange(1200).reshape(300,4))
df.to_csv('test.csv', header=False, index=Fal... | 103 | load_dataset for LOCAL CSV files report CONNECTION ERROR
## load_dataset for LOCAL CSV files report CONNECTION ERROR
- **Description:**
A local demo csv file:
```
import pandas as pd
import numpy as np
from datasets import load_dataset
import torch
import transformers
df = pd.DataFrame(np.arange(1200).res... | [
-0.3247119784,
0.0161048751,
-0.1088048592,
0.0260045994,
0.2420430928,
0.0198688172,
0.7173693776,
0.3756362498,
0.2509218454,
0.1337040961,
-0.0342322253,
0.1497043818,
0.079799369,
0.0211836994,
-0.1269313097,
-0.1772821993,
-0.1271575987,
0.2743317187,
-0.4187990725,
0.0144... |
https://github.com/huggingface/datasets/issues/807 | load_dataset for LOCAL CSV files report CONNECTION ERROR | Hello, how did you solve this problem?
> > > Hi !
> > > The url works on my side.
> > > Is the url working in your navigator ?
> > > Are you connected to internet ? Does your network block access to `raw.githubusercontent.com` ?
> >
> >
> > I tried another server, it's working now. Thanks a lot.
> > And I'... | ## load_dataset for LOCAL CSV files report CONNECTION ERROR
- **Description:**
A local demo csv file:
```
import pandas as pd
import numpy as np
from datasets import load_dataset
import torch
import transformers
df = pd.DataFrame(np.arange(1200).reshape(300,4))
df.to_csv('test.csv', header=False, index=Fal... | 136 | load_dataset for LOCAL CSV files report CONNECTION ERROR
## load_dataset for LOCAL CSV files report CONNECTION ERROR
- **Description:**
A local demo csv file:
```
import pandas as pd
import numpy as np
from datasets import load_dataset
import torch
import transformers
df = pd.DataFrame(np.arange(1200).res... | [
-0.3247119784,
0.0161048751,
-0.1088048592,
0.0260045994,
0.2420430928,
0.0198688172,
0.7173693776,
0.3756362498,
0.2509218454,
0.1337040961,
-0.0342322253,
0.1497043818,
0.079799369,
0.0211836994,
-0.1269313097,
-0.1772821993,
-0.1271575987,
0.2743317187,
-0.4187990725,
0.0144... |
https://github.com/huggingface/datasets/issues/807 | load_dataset for LOCAL CSV files report CONNECTION ERROR | > hello, how did you solve this problems?
>
> > > > Hi !
> > > > The url works on my side.
> > > > Is the url working in your navigator ?
> > > > Are you connected to internet ? Does your network block access to `raw.githubusercontent.com` ?
> > >
> > >
> > > I tried another server, it's working now. Thanks ... | ## load_dataset for LOCAL CSV files report CONNECTION ERROR
- **Description:**
A local demo csv file:
```
import pandas as pd
import numpy as np
from datasets import load_dataset
import torch
import transformers
df = pd.DataFrame(np.arange(1200).reshape(300,4))
df.to_csv('test.csv', header=False, index=Fal... | 155 | load_dataset for LOCAL CSV files report CONNECTION ERROR
## load_dataset for LOCAL CSV files report CONNECTION ERROR
- **Description:**
A local demo csv file:
```
import pandas as pd
import numpy as np
from datasets import load_dataset
import torch
import transformers
df = pd.DataFrame(np.arange(1200).res... | [
-0.3247119784,
0.0161048751,
-0.1088048592,
0.0260045994,
0.2420430928,
0.0198688172,
0.7173693776,
0.3756362498,
0.2509218454,
0.1337040961,
-0.0342322253,
0.1497043818,
0.079799369,
0.0211836994,
-0.1269313097,
-0.1772821993,
-0.1271575987,
0.2743317187,
-0.4187990725,
0.0144... |