Columns (name: type, min–max length/value):

html_url: stringlengths 48–51
title: stringlengths 5–268
comments: stringlengths 63–51.8k
body: stringlengths 0–36.2k
comment_length: int64 16–1.52k
text: stringlengths 164–54.1k
embeddings: list
https://github.com/huggingface/datasets/issues/2076
Issue: Dataset download error
Is this a command to update my local files, or does it fix the file in the GitHub repo in general? (I am not so familiar with the datasets-cli command here.) I also took a brief look at the **Sharing your dataset** section; it looks like I could fix that locally and push it to the repo? I guess we are in the "canonical" category?
The download link in the `iwslt2017.py` file does not seem to work anymore. For example: `FileNotFoundError: Couldn't find file at https://wit3.fbk.eu/archive/2017-01-trnted/texts/zh/en/zh-en.tgz` It would be nice if we could modify the script and use the new download link.
58
Issue: Dataset download error The download link in the `iwslt2017.py` file does not seem to work anymore. For example: `FileNotFoundError: Couldn't find file at https://wit3.fbk.eu/archive/2017-01-trnted/texts/zh/en/zh-en.tgz` It would be nice if we could modify the script and use the new download link. Is this a...
[ -0.3287452161, 0.227845639, 0.0171682481, -0.1485099643, -0.0278216824, 0.1295357794, 0.2159445584, 0.4029397666, 0.1252268106, -0.0533369668, 0.1292183399, -0.1483828574, 0.3661811948, 0.1626038402, 0.0930708945, 0.0371342711, 0.0882588625, 0.1164588034, -0.1444553882, -0.1393...
https://github.com/huggingface/datasets/issues/2076
Issue: Dataset download error
This command will update your local file. Then you can open a Pull Request to push your fix to the github repo :) And yes you are right, it is a "canonical" dataset, i.e. a dataset script defined in this github repo (as opposed to dataset repositories of users on the huggingface hub)
The download link in the `iwslt2017.py` file does not seem to work anymore. For example: `FileNotFoundError: Couldn't find file at https://wit3.fbk.eu/archive/2017-01-trnted/texts/zh/en/zh-en.tgz` It would be nice if we could modify the script and use the new download link.
53
Issue: Dataset download error The download link in the `iwslt2017.py` file does not seem to work anymore. For example: `FileNotFoundError: Couldn't find file at https://wit3.fbk.eu/archive/2017-01-trnted/texts/zh/en/zh-en.tgz` It would be nice if we could modify the script and use the new download link. This comm...
[ -0.2335668951, -0.1188602, 0.01896373, -0.1395545453, 0.0161119103, -0.005983829, 0.1757640392, 0.3258336484, 0.2067855448, -0.0491478145, 0.1204307824, -0.0448962301, 0.2096991241, 0.4580941498, 0.1885542125, -0.0856997594, 0.0732323378, 0.0046083336, -0.3236833811, -0.0383641...
https://github.com/huggingface/datasets/issues/2076
Issue: Dataset download error
Hi, thanks for the answer. I gave the problem a try today, but I encountered an upload error:
```
git push -u origin fix_link_iwslt
Enter passphrase for key '/home2/xuhuizh/.ssh/id_rsa':
ERROR: Permission to huggingface/datasets.git denied to XuhuiZhou.
fatal: Could not read from remote repository.
P...
```
The download link in the `iwslt2017.py` file does not seem to work anymore. For example: `FileNotFoundError: Couldn't find file at https://wit3.fbk.eu/archive/2017-01-trnted/texts/zh/en/zh-en.tgz` It would be nice if we could modify the script and use the new download link.
148
Issue: Dataset download error The download link in the `iwslt2017.py` file does not seem to work anymore. For example: `FileNotFoundError: Couldn't find file at https://wit3.fbk.eu/archive/2017-01-trnted/texts/zh/en/zh-en.tgz` It would be nice if we could modify the script and use the new download link. Hi, thank...
[ -0.1731774956, -0.010389966, 0.057309635, -0.0393561572, 0.0538162701, -0.0124165909, 0.2563003004, 0.3127624094, 0.1133864149, 0.0201734807, 0.0313478038, 0.0189823192, 0.3240017295, 0.2409654856, 0.1655394733, 0.0832948908, 0.0935478732, 0.0070206583, -0.2977553308, -0.018183...
https://github.com/huggingface/datasets/issues/2076
Issue: Dataset download error
Hi! To create a PR on this repo you must fork it and create a branch on your fork. See how to fork the repo [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md#start-by-preparing-your-environment). And to make the command work without the `ExpectedMoreDownloadedFiles` error, you just need t...
The download link in the `iwslt2017.py` file does not seem to work anymore. For example: `FileNotFoundError: Couldn't find file at https://wit3.fbk.eu/archive/2017-01-trnted/texts/zh/en/zh-en.tgz` It would be nice if we could modify the script and use the new download link.
45
Issue: Dataset download error The download link in the `iwslt2017.py` file does not seem to work anymore. For example: `FileNotFoundError: Couldn't find file at https://wit3.fbk.eu/archive/2017-01-trnted/texts/zh/en/zh-en.tgz` It would be nice if we could modify the script and use the new download link. Hi! To c...
[ -0.2560881078, -0.0967695341, 0.017853573, -0.0659617111, 0.0944882855, 0.0495946072, 0.1920960695, 0.2611151934, 0.163617149, 0.0419376343, 0.0050669471, -0.0133629981, 0.2525421381, 0.394125253, 0.3025113642, -0.1754236072, 0.1524585187, 0.0719697922, -0.2172750682, 0.0543561...
https://github.com/huggingface/datasets/issues/2076
Issue: Dataset download error
Hi @XuhuiZhou, As @lhoestq has well explained, you need to fork HF's repository, create a feature branch in your fork, push your changes to it and then open a Pull Request to HF's upstream repository. This is so because at HuggingFace Datasets we follow a development model called "Fork and Pull Model". You can find ...
The download link in the `iwslt2017.py` file does not seem to work anymore. For example: `FileNotFoundError: Couldn't find file at https://wit3.fbk.eu/archive/2017-01-trnted/texts/zh/en/zh-en.tgz` It would be nice if we could modify the script and use the new download link.
126
Issue: Dataset download error The download link in the `iwslt2017.py` file does not seem to work anymore. For example: `FileNotFoundError: Couldn't find file at https://wit3.fbk.eu/archive/2017-01-trnted/texts/zh/en/zh-en.tgz` It would be nice if we could modify the script and use the new download link. Hi @Xuhui...
[ -0.2110832632, -0.2354267091, 0.0539520606, -0.1203535199, -0.0160377808, 0.0573176518, 0.073691003, 0.3318696916, 0.2753102779, 0.036061056, -0.1463588029, -0.1400412619, 0.3278671205, 0.3584644198, 0.2050700635, -0.1643645465, 0.1016286835, 0.0192575175, -0.3486397862, -0.025...
https://github.com/huggingface/datasets/issues/2075
ConnectionError: Couldn't reach common_voice.py
Hi @LifaSun, thanks for reporting this issue. Sometimes, GitHub has some connectivity problems. Could you confirm that the problem persists?
When I run:
```
from datasets import load_dataset, load_metric

common_voice_train = load_dataset("common_voice", "zh-CN", split="train+validation")
common_voice_test = load_dataset("common_voice", "zh-CN", split="test")
```
I got: `ConnectionError: Couldn't reach https://raw.githubusercontent.com/huggingface/datasets/ma...`
20
ConnectionError: Couldn't reach common_voice.py When I run: from datasets import load_dataset, load_metric common_voice_train = load_dataset("common_voice", "zh-CN", split="train+validation") common_voice_test = load_dataset("common_voice", "zh-CN", split="test") Got: ConnectionError: Couldn't reach https:/...
[ -0.4876464307, -0.0846989974, -0.0791776702, -0.0048034876, 0.3903559148, 0.0129869124, 0.270430088, 0.3016327322, -0.0631876141, 0.3089750707, -0.2126753628, -0.1347060353, 0.1226182729, 0.0170100797, -0.0094281342, -0.0931783244, -0.0044785971, 0.0029829929, 0.1925333142, 0.0...
https://github.com/huggingface/datasets/issues/2070
ArrowInvalid issue for squad v2 dataset
Hi! This error happens when you use `map` in batched mode and your function doesn't return the same number of values per column. Indeed, since you're using `map` in batched mode, `prepare_validation_features` must take a batch as input (i.e. a dictionary of multiple rows of the dataset) and return a batch. ...
Hello, I am using the huggingface official question answering example notebook (https://colab.research.google.com/github/huggingface/notebooks/blob/master/examples/question_answering.ipynb). In the prepare_validation_features function, I made some modifications to tokenize a new set of questions with the original co...
80
ArrowInvalid issue for squad v2 dataset Hello, I am using the huggingface official question answering example notebook (https://colab.research.google.com/github/huggingface/notebooks/blob/master/examples/question_answering.ipynb). In the prepare_validation_features function, I made some modifications to tokenize ...
[ 0.1950867027, -0.5012052655, 0.0041138758, 0.2735459805, 0.2830948532, -0.0630853325, 0.2918868065, 0.1244553998, -0.3849654794, 0.1932285577, 0.0684069395, 0.2712976038, 0.1175049618, -0.2701624334, -0.0708334893, -0.1942007393, 0.0438765846, -0.0185052399, 0.1663551182, -0.07...
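The batched-`map` contract described in the answer above can be sketched without downloading anything. The data and function below are hypothetical stand-ins (not the notebook's actual `prepare_validation_features`): the point is that every output column must end up with the same number of rows, even when that number differs from the input batch size.

```python
# Minimal sketch of the batched-map contract: a batched function takes a
# dict of columns (lists) and returns a dict of columns. All returned
# columns must have equal length, or Arrow raises ArrowInvalid.

def prepare_batch(batch):
    # Expand each input row into two output rows (like overflowing
    # tokenizer windows in the QA notebook).
    ids, texts = [], []
    for i, text in zip(batch["id"], batch["text"]):
        for window in (text[:4], text[4:]):
            ids.append(i)
            texts.append(window)
    # Both output columns have length 4 here -> consistent, no error.
    return {"id": ids, "text": texts}

batch = {"id": [0, 1], "text": ["abcdefgh", "ijklmnop"]}
out = prepare_batch(batch)
```

If one column had gotten an extra append (say, `ids` but not `texts`), the lengths would diverge and `map(..., batched=True)` would fail exactly as reported.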
https://github.com/huggingface/datasets/issues/2068
PyTorch not available error on SageMaker GPU docker though it is installed
Hey @sivakhno, what does your `requirements.txt` look like for installing the `datasets` library, and which version of it are you running? Can you try to install `datasets>=1.4.0`?
I get an error when running data loading using the SageMaker SDK:
```
File "main.py", line 34, in <module>
  run_training()
File "main.py", line 25, in run_training
  dm.setup('fit')
File "/opt/conda/lib/python3.6/site-packages/pytorch_lightning/core/datamodule.py", line 92, in wrapped_fn
  return fn(*a...
```
27
PyTorch not available error on SageMaker GPU docker though it is installed I get an error when running data loading using the SageMaker SDK ``` File "main.py", line 34, in <module> run_training() File "main.py", line 25, in run_training dm.setup('fit') File "/opt/conda/lib/python3.6/site-packages/p...
[ -0.3578641415, 0.0624307953, -0.0035596238, 0.0923951641, 0.2011191398, 0.0346485153, 0.5519387722, 0.3234076202, 0.3856209815, 0.0929586217, 0.000882448, 0.396581918, -0.0354305804, 0.1952464283, 0.0736840293, -0.2641964853, -0.0560294911, 0.2284690887, -0.310595721, 0.0267089...
https://github.com/huggingface/datasets/issues/2068
PyTorch not available error on SageMaker GPU docker though it is installed
Hi @philschmid - thanks for the suggestion. I am using `datasets==1.4.1`. I have also tried using `torch==1.6.0` (docker `763104351884.dkr.ecr.eu-central-1.amazonaws.com/pytorch-training:1.6.0-gpu-py3`), but the error is the same.
I get an error when running data loading using the SageMaker SDK:
```
File "main.py", line 34, in <module>
  run_training()
File "main.py", line 25, in run_training
  dm.setup('fit')
File "/opt/conda/lib/python3.6/site-packages/pytorch_lightning/core/datamodule.py", line 92, in wrapped_fn
  return fn(*a...
```
25
PyTorch not available error on SageMaker GPU docker though it is installed I get an error when running data loading using the SageMaker SDK ``` File "main.py", line 34, in <module> run_training() File "main.py", line 25, in run_training dm.setup('fit') File "/opt/conda/lib/python3.6/site-packages/p...
[ -0.3578641415, 0.0624307953, -0.0035596238, 0.0923951641, 0.2011191398, 0.0346485153, 0.5519387722, 0.3234076202, 0.3856209815, 0.0929586217, 0.000882448, 0.396581918, -0.0354305804, 0.1952464283, 0.0736840293, -0.2641964853, -0.0560294911, 0.2284690887, -0.310595721, 0.0267089...
https://github.com/huggingface/datasets/issues/2068
PyTorch not available error on SageMaker GPU docker though it is installed
Could you paste the code you use to start your training job, and the fine-tuning script you run?
I get an error when running data loading using the SageMaker SDK:
```
File "main.py", line 34, in <module>
  run_training()
File "main.py", line 25, in run_training
  dm.setup('fit')
File "/opt/conda/lib/python3.6/site-packages/pytorch_lightning/core/datamodule.py", line 92, in wrapped_fn
  return fn(*a...
```
17
PyTorch not available error on SageMaker GPU docker though it is installed I get an error when running data loading using the SageMaker SDK ``` File "main.py", line 34, in <module> run_training() File "main.py", line 25, in run_training dm.setup('fit') File "/opt/conda/lib/python3.6/site-packages/p...
[ -0.3578641415, 0.0624307953, -0.0035596238, 0.0923951641, 0.2011191398, 0.0346485153, 0.5519387722, 0.3234076202, 0.3856209815, 0.0929586217, 0.000882448, 0.396581918, -0.0354305804, 0.1952464283, 0.0736840293, -0.2641964853, -0.0560294911, 0.2284690887, -0.310595721, 0.0267089...
https://github.com/huggingface/datasets/issues/2067
Multiprocessing windows error
Hi! Thanks for reporting. This looks like a bug. Could you try to provide a minimal code example that reproduces the issue? That would be very helpful! Otherwise I can try to run the wav2vec2 code above on my side, but probably not this week.
As described here: https://huggingface.co/blog/fine-tune-xlsr-wav2vec2 When using the num_proc argument on Windows, the whole Python environment crashes and hangs in a loop, for example at the map_to_array part. An error occurs because the cache file already exists and Windows throws an error. After this the log c...
48
Multiprocessing windows error As described here: https://huggingface.co/blog/fine-tune-xlsr-wav2vec2 When using the num_proc argument on Windows, the whole Python environment crashes and hangs in a loop, for example at the map_to_array part. An error occurs because the cache file already exists and Windows throws...
[ -0.0581422858, -0.4536789358, -0.0355622098, 0.2569918334, 0.0860945061, -0.0037738001, 0.1155310273, -0.0629603565, 0.0408042744, 0.3191868365, 0.2538126111, 0.0701291561, -0.307256341, -0.042545788, -0.2219942361, -0.246347174, 0.1892081499, -0.1294748038, 0.3075395823, 0.335...
https://github.com/huggingface/datasets/issues/2067
Multiprocessing windows error
```
from datasets import load_dataset

dataset = load_dataset('glue', 'mrpc', split='train')
updated_dataset = dataset.map(lambda example: {'sentence1': 'My sentence: ' + example['sentence1']}, num_proc=4)
```
As described here: https://huggingface.co/blog/fine-tune-xlsr-wav2vec2 When using the num_proc argument on Windows, the whole Python environment crashes and hangs in a loop, for example at the map_to_array part. An error occurs because the cache file already exists and Windows throws an error. After this the log c...
22
Multiprocessing windows error As described here: https://huggingface.co/blog/fine-tune-xlsr-wav2vec2 When using the num_proc argument on Windows, the whole Python environment crashes and hangs in a loop, for example at the map_to_array part. An error occurs because the cache file already exists and Windows throws...
[ -0.2164213806, -0.2695317268, -0.0307763387, 0.1850229055, 0.1784022152, 0.1376413852, 0.0575685389, -0.0025023592, 0.0087180454, 0.2766420543, 0.1489747614, 0.161060378, -0.2854522169, -0.1023179963, -0.1237754375, -0.1828473955, 0.1945978254, -0.1632982492, 0.2638875842, 0.34...
https://github.com/huggingface/datasets/issues/2067
Multiprocessing windows error
I was able to copy some of the shell output. This is repeating every half second. Win 10, Anaconda with Python 3.8, datasets installed from the main branch.
```
File "C:\Users\flozi\anaconda3\envs\wav2vec\lib\site-packages\multiprocess\spawn.py", line 287, in _fixup_main_from_path
  _check_not_importing_main()...
```
As described here: https://huggingface.co/blog/fine-tune-xlsr-wav2vec2 When using the num_proc argument on Windows, the whole Python environment crashes and hangs in a loop, for example at the map_to_array part. An error occurs because the cache file already exists and Windows throws an error. After this the log c...
433
Multiprocessing windows error As described here: https://huggingface.co/blog/fine-tune-xlsr-wav2vec2 When using the num_proc argument on Windows, the whole Python environment crashes and hangs in a loop, for example at the map_to_array part. An error occurs because the cache file already exists and Windows throws...
[ -0.1881469637, -0.3144048154, -0.0391352884, 0.1985248476, 0.1111998558, 0.004422795, 0.147380203, 0.0522425212, 0.0072061773, 0.1542068422, 0.038061697, -0.0369985178, -0.230133146, 0.004947972, -0.1641259193, -0.2017735839, 0.1229712963, -0.147796452, 0.1082703397, 0.31760066...
https://github.com/huggingface/datasets/issues/2067
Multiprocessing windows error
Thanks, this is really helpful! I'll try to reproduce it on my side and come back to you.
As described here: https://huggingface.co/blog/fine-tune-xlsr-wav2vec2 When using the num_proc argument on Windows, the whole Python environment crashes and hangs in a loop, for example at the map_to_array part. An error occurs because the cache file already exists and Windows throws an error. After this the log c...
18
Multiprocessing windows error As described here: https://huggingface.co/blog/fine-tune-xlsr-wav2vec2 When using the num_proc argument on Windows, the whole Python environment crashes and hangs in a loop, for example at the map_to_array part. An error occurs because the cache file already exists and Windows throws...
[ -0.1506280601, -0.4665437937, -0.0850918368, 0.2313074768, 0.0865276307, 0.0101604424, 0.0336138345, -0.030732628, 0.0230881274, 0.2842901349, 0.1215856373, 0.0667374209, -0.3062892556, 0.0160494726, -0.1941300482, -0.1727217436, 0.0984338149, -0.1465356797, 0.2517451942, 0.324...
https://github.com/huggingface/datasets/issues/2067
Multiprocessing windows error
Adding `if __name__ == '__main__':` before calling the map function stops the error, but the script still repeats endlessly.
As described here: https://huggingface.co/blog/fine-tune-xlsr-wav2vec2 When using the num_proc argument on Windows, the whole Python environment crashes and hangs in a loop, for example at the map_to_array part. An error occurs because the cache file already exists and Windows throws an error. After this the log c...
20
Multiprocessing windows error As described here: https://huggingface.co/blog/fine-tune-xlsr-wav2vec2 When using the num_proc argument on Windows, the whole Python environment crashes and hangs in a loop, for example at the map_to_array part. An error occurs because the cache file already exists and Windows throws...
[ -0.1108911186, -0.4392868578, -0.0885382891, 0.2090214342, 0.0622632839, 0.0388560779, 0.005569844, 0.0343123823, 0.1305547655, 0.3507120311, 0.1845918596, 0.1130339652, -0.2703491449, -0.0625601634, -0.1630699337, -0.0991827175, 0.0958347321, -0.0558622554, 0.2845199704, 0.306...
https://github.com/huggingface/datasets/issues/2067
Multiprocessing windows error
Indeed you needed `if __name__ == '__main__'`, since according to [this stackoverflow post](https://stackoverflow.com/a/18205006): > On Windows the subprocesses will import (i.e. execute) the main module at start. You need to insert an if __name__ == '__main__': guard in the main module to avoid creating subprocesses ...
As described here: https://huggingface.co/blog/fine-tune-xlsr-wav2vec2 When using the num_proc argument on Windows, the whole Python environment crashes and hangs in a loop, for example at the map_to_array part. An error occurs because the cache file already exists and Windows throws an error. After this the log c...
59
Multiprocessing windows error As described here: https://huggingface.co/blog/fine-tune-xlsr-wav2vec2 When using the num_proc argument on Windows, the whole Python environment crashes and hangs in a loop, for example at the map_to_array part. An error occurs because the cache file already exists and Windows throws...
[ -0.1351818293, -0.4226137698, -0.1194845662, -0.0176143236, 0.1080634296, -0.0960123762, 0.1320637316, 0.0308470149, -0.1013916209, 0.3602204323, -0.0551886633, 0.0946900547, -0.3736595809, -0.1421169192, -0.1579381973, -0.0196445417, 0.0329826511, -0.1128273606, 0.047878623, 0...
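A minimal, self-contained sketch of the guard discussed above, using only the stdlib `multiprocessing` module rather than the wav2vec2 script: under Windows's spawn start method every worker re-imports the main module, so any unguarded top-level code (including the `map` call itself) runs again in each subprocess.

```python
# Sketch of the `if __name__ == "__main__"` guard. With the "spawn"
# start method (the default on Windows), worker processes re-import the
# main module on startup; the guard keeps them from re-running the
# top-level work and spawning subprocesses of their own.
import multiprocessing as mp

def double(x):
    return 2 * x

def main():
    # Stand-in for the dataset.map(..., num_proc=...) call.
    with mp.Pool(2) as pool:
        return pool.map(double, [1, 2, 3])

if __name__ == "__main__":
    # Only the original parent process reaches this block;
    # re-imported workers skip it.
    print(main())  # [2, 4, 6]
```

Without the guard, the `mp.Pool` line would execute again inside each freshly imported worker, which is exactly the endless-restart behavior reported above.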
https://github.com/huggingface/datasets/issues/2067
Multiprocessing windows error
```
Traceback (most recent call last):
  File "C:\Users\flozi\anaconda3\envs\wav2vec\lib\shutil.py", line 791, in move
    os.rename(src, real_dst)
FileExistsError: [WinError 183] Eine Datei kann nicht erstellt werden, wenn sie bereits vorhanden ist: 'D:\\huggingfacecache\\common_voice\\de\\6.1.0\\0041e06ab061b91d0...
```
(The German error message translates to: "A file cannot be created when it already exists.")
As described here: https://huggingface.co/blog/fine-tune-xlsr-wav2vec2 When using the num_proc argument on Windows, the whole Python environment crashes and hangs in a loop, for example at the map_to_array part. An error occurs because the cache file already exists and Windows throws an error. After this the log c...
224
Multiprocessing windows error As described here: https://huggingface.co/blog/fine-tune-xlsr-wav2vec2 When using the num_proc argument on Windows, the whole Python environment crashes and hangs in a loop, for example at the map_to_array part. An error occurs because the cache file already exists and Windows throws...
[ -0.0126368422, -0.3874339461, -0.0540231504, 0.2022801042, 0.1353474557, 0.0187554825, 0.0908481181, 0.0778498054, 0.0393030718, 0.2612572014, 0.0116537744, -0.0717008859, -0.300147742, -0.0775014982, -0.1249799281, -0.2181283683, 0.1993367672, -0.1225007549, 0.2698474824, 0.35...
https://github.com/huggingface/datasets/issues/2067
Multiprocessing windows error
Usually an OSError on an arrow file on Windows means that the file is currently open as a dataset object, so you can't overwrite it until the dataset object falls out of scope. Can you make sure that there's no dataset object that loaded the `cache-9b4f203a63742dfc.arrow` file?
As described here: https://huggingface.co/blog/fine-tune-xlsr-wav2vec2 When using the num_proc argument on Windows, the whole Python environment crashes and hangs in a loop, for example at the map_to_array part. An error occurs because the cache file already exists and Windows throws an error. After this the log c...
47
Multiprocessing windows error As described here: https://huggingface.co/blog/fine-tune-xlsr-wav2vec2 When using the num_proc argument on Windows, the whole Python environment crashes and hangs in a loop, for example at the map_to_array part. An error occurs because the cache file already exists and Windows throws...
[ -0.3012990355, -0.2888681293, -0.0838766098, 0.1985546947, 0.0243260879, -0.0525321066, 0.0652538985, -0.0146975806, 0.0848994255, 0.2829810977, 0.1703790575, 0.2638421655, -0.3119596243, -0.1078653336, -0.3288582563, -0.1583806276, 0.0069895303, -0.0020368753, 0.1382289231, 0....
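The failure mode described above can be illustrated with a stdlib-only sketch (this is not the `datasets` internals): a file that is still open or memory-mapped cannot be replaced on Windows, so every handle to it has to be released before the cache file can be overwritten. On POSIX systems the replace succeeds either way; the ordering is what matters on Windows.

```python
# Sketch: release all handles to a "cache" file before replacing it.
import mmap
import os
import shutil
import tempfile

tmpdir = tempfile.mkdtemp()
cache = os.path.join(tmpdir, "cache.arrow")
with open(cache, "wb") as f:
    f.write(b"\x00" * 16)

# A dataset object holding the arrow file open looks roughly like this:
# an open file descriptor plus a read-only memory map.
handle = open(cache, "rb")
mapped = mmap.mmap(handle.fileno(), 0, access=mmap.ACCESS_READ)

# On Windows, replacing `cache` here would raise, because the file is
# still mapped. Releasing every reference first makes the replace safe.
mapped.close()
handle.close()

new = os.path.join(tmpdir, "new.arrow")
with open(new, "wb") as f:
    f.write(b"\x01" * 16)
os.replace(new, cache)  # succeeds now that no handle remains

with open(cache, "rb") as f:
    first = f.read(1)
shutil.rmtree(tmpdir)
```

In `datasets` terms: deleting (or letting go out of scope) the `Dataset` that loaded `cache-....arrow` is the equivalent of the two `close()` calls above.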
https://github.com/huggingface/datasets/issues/2067
Multiprocessing windows error
Now I understand. The error occurs because the script got restarted in another thread, so the object is already loaded. I still don't have an idea why a new thread starts the whole script again.
As described here: https://huggingface.co/blog/fine-tune-xlsr-wav2vec2 When using the num_proc argument on Windows, the whole Python environment crashes and hangs in a loop, for example at the map_to_array part. An error occurs because the cache file already exists and Windows throws an error. After this the log c...
34
Multiprocessing windows error As described here: https://huggingface.co/blog/fine-tune-xlsr-wav2vec2 When using the num_proc argument on Windows, the whole Python environment crashes and hangs in a loop, for example at the map_to_array part. An error occurs because the cache file already exists and Windows throws...
[ -0.1712802351, -0.6625726223, -0.0156674422, 0.3150124848, -0.0103679048, -0.0258935057, 0.1337392032, -0.1321053058, 0.0881272927, 0.2671470046, 0.2327954173, 0.0681706592, -0.1253812611, -0.0673757866, -0.1413183808, -0.0939046815, 0.0912754834, -0.0172673017, 0.2123173773, 0...
https://github.com/huggingface/datasets/issues/2065
Only user permission of saved cache files, not group
Hi! Thanks for reporting. Currently there's no way to specify this. When loading/processing a dataset, the arrow file is written using a temporary file. Then, once writing is finished, it's moved to the cache directory (using `shutil.move` [here](https://github.com/huggingface/datasets/blob/f6b8251eb975f66a568356...
Hello, It seems when a cached file is saved from calling `dataset.map` for preprocessing, it gets the user permissions and none of the user's group permissions. As we share data files across members of our team, this is causing a bit of an issue as we have to continually reset the permission of the files. Do you kno...
67
Only user permission of saved cache files, not group Hello, It seems when a cached file is saved from calling `dataset.map` for preprocessing, it gets the user permissions and none of the user's group permissions. As we share data files across members of our team, this is causing a bit of an issue as we have to co...
[ -0.1139015034, 0.2815681696, -0.078934975, 0.1177363172, -0.0259674694, 0.0905663073, 0.4060144722, 0.1146639213, -0.0934795439, -0.0789992288, -0.1739812642, 0.1220879406, -0.0590264425, -0.209066838, -0.0211600438, 0.0670155138, 0.1375489682, 0.1199801043, -0.0209173057, -0.0...
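A hedged sketch of the fix being discussed above. The helper names here are made up for illustration (this is not the actual `datasets` code): since `NamedTemporaryFile` creates files with mode 0o600, one option is to widen the permissions from a 0o666 base minus the process umask after `shutil.move` lands the file in the cache directory.

```python
# Sketch: restore group/other read access after moving a temp file.
import os
import shutil
import tempfile

def move_and_open_permissions(src, dst):
    """Hypothetical helper: move src to dst, then widen dst from
    tempfile's 0o600 to 0o666 minus the process umask."""
    shutil.move(src, dst)
    umask = os.umask(0)   # read the current umask...
    os.umask(umask)       # ...and restore it immediately
    os.chmod(dst, 0o666 & ~umask)

def cache_file_mode():
    # NamedTemporaryFile is created with mode 0o600 (user-only).
    tmp = tempfile.NamedTemporaryFile(delete=False)
    tmp.write(b"data")
    tmp.close()
    dst = tmp.name + ".arrow"
    move_and_open_permissions(tmp.name, dst)
    mode = os.stat(dst).st_mode & 0o777
    os.remove(dst)
    return mode

mode = cache_file_mode()
```

Under the common 0o022 umask this yields 0o644, i.e. a group-readable cache file, which is what the issue asks for.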
https://github.com/huggingface/datasets/issues/2065
Only user permission of saved cache files, not group
Hi @lhoestq, I looked into this and yes, you're right. The `NamedTemporaryFile` is always created with mode 0600, which prevents the group from reading the file. Should we change the permissions of `tmp_file.name` [here](https://github.com/huggingface/datasets/blob/f6b8251eb975f66a568356d2a40d86442c03beb9/src/datasets/arro...
Hello, It seems when a cached file is saved from calling `dataset.map` for preprocessing, it gets the user permissions and none of the user's group permissions. As we share data files across members of our team, this is causing a bit of an issue as we have to continually reset the permission of the files. Do you kno...
47
Only user permission of saved cache files, not group Hello, It seems when a cached file is saved from calling `dataset.map` for preprocessing, it gets the user permissions and none of the user's group permissions. As we share data files across members of our team, this is causing a bit of an issue as we have to co...
[ 0.0629380941, 0.035980925, -0.1101711914, 0.2581421733, 0.0601992607, 0.1329316199, 0.346363008, 0.0934352875, -0.2962670326, -0.0303033683, -0.2803180814, -0.052603744, 0.0845716819, -0.3134354651, -0.0208832379, 0.0789134577, 0.0993252993, 0.046586249, 0.0940524414, -0.024798...
https://github.com/huggingface/datasets/issues/2065
Only user permission of saved cache files, not group
Good idea :) we could even update the permissions after the file has been moved by shutil.move [here](https://github.com/huggingface/datasets/blob/f6b8251eb975f66a568356d2a40d86442c03beb9/src/datasets/arrow_dataset.py#L1899) and [here](https://github.com/huggingface/datasets/blob/f6b8251eb975f66a568356d2a40d86442c03beb...
Hello, It seems when a cached file is saved from calling `dataset.map` for preprocessing, it gets the user permissions and none of the user's group permissions. As we share data files across members of our team, this is causing a bit of an issue as we have to continually reset the permission of the files. Do you kno...
45
Only user permission of saved cache files, not group Hello, It seems when a cached file is saved from calling `dataset.map` for preprocessing, it gets the user permissions and none of the user's group permissions. As we share data files across members of our team, this is causing a bit of an issue as we have to co...
[ -0.1486677676, 0.1404386163, -0.0772055015, 0.0294159204, 0.1295793951, 0.0463582538, 0.3478870988, 0.0346235782, -0.1299448311, 0.0200760011, -0.1389163733, -0.0676593333, 0.0185788739, -0.2712924778, -0.0381217822, 0.184007436, 0.1872668862, -0.0505725369, 0.0260023084, -0.15...
https://github.com/huggingface/datasets/issues/2065
Only user permission of saved cache files, not group
Would it be possible to actually set the umask based on a user-provided argument? For example, a popular use case my team has is using a shared file system for processing datasets. This may involve writing/deleting other files, or changing filenames, which a -rw-r--r-- wouldn't fix.
Hello, It seems when a cached file is saved from calling `dataset.map` for preprocessing, it gets the user permissions and none of the user's group permissions. As we share data files across members of our team, this is causing a bit of an issue as we have to continually reset the permission of the files. Do you kno...
45
Only user permission of saved cache files, not group Hello, It seems when a cached file is saved from calling `dataset.map` for preprocessing, it gets the user permissions and none of the user's group permissions. As we share data files across members of our team, this is causing a bit of an issue as we have to co...
[ -0.1471500695, 0.092066586, -0.0505224541, -0.050575342, -0.1024407223, 0.0650751814, 0.2770628035, 0.0333130471, -0.1205726936, 0.0754825622, -0.0888409913, -0.1201790124, 0.0910979286, -0.421970278, -0.1925660521, 0.2325335294, 0.1333577484, -0.0176634323, 0.1507477909, -0.15...
https://github.com/huggingface/datasets/issues/2065
Only user permission of saved cache files, not group
Note that you can get the cache files of a dataset with the `cache_files` attribute. Then you can `chmod` those files and all the other cache files in the same directory. Moreover we can probably keep the same permissions after each transform. This way you just need to set the permissions once after doing `load_da...
Hello, It seems when a cached file is saved from calling `dataset.map` for preprocessing, it gets the user permissions and none of the user's group permissions. As we share data files across members of our team, this is causing a bit of an issue as we have to continually reset the permission of the files. Do you kno...
75
Only user permission of saved cache files, not group Hello, It seems when a cached file is saved from calling `dataset.map` for preprocessing, it gets the user permissions and none of the user's group permissions. As we share data files across members of our team, this is causing a bit of an issue as we have to co...
[ -0.1194678247, 0.259424001, -0.0967043117, 0.0303851962, 0.0231790356, 0.1083496511, 0.3391304314, 0.2117254138, -0.3362344801, -0.0368233137, -0.1443871558, 0.02206029, 0.0109384349, -0.2525071204, -0.030132547, 0.0743882433, 0.227810353, 0.1208136901, -0.1184285656, -0.077808...
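The workaround above can be sketched offline. The `FakeDataset` stand-in below and the dict shape of `cache_files` entries are assumptions for illustration (the attribute's exact format is not guaranteed across `datasets` versions); with a real `Dataset` the loop body would be the same.

```python
# Sketch: chmod every cache file listed by a dataset's cache_files.
import os
import stat
import tempfile

class FakeDataset:
    """Stand-in for datasets.Dataset so this runs without a download.
    Assumes cache_files entries are dicts with a 'filename' key."""
    def __init__(self, files):
        self.cache_files = [{"filename": f} for f in files]

# Simulate a user-only (0o600) cache file produced by load_dataset/map.
tmp = tempfile.NamedTemporaryFile(delete=False)
tmp.close()
os.chmod(tmp.name, 0o600)
ds = FakeDataset([tmp.name])

for cache_file in ds.cache_files:
    path = cache_file["filename"]
    # Add group read permission on top of whatever mode the file has.
    os.chmod(path, os.stat(path).st_mode | stat.S_IRGRP)

mode_after = os.stat(tmp.name).st_mode
os.remove(tmp.name)
```

This is the "chmod once after `load_dataset`" half of the proposal; the other half (new transforms inheriting those permissions) would have to live inside the library.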
https://github.com/huggingface/datasets/issues/2065
Only user permission of saved cache files, not group
This means we'll check the permissions of other `cache_files` already created for a dataset before setting permissions for new `cache_files`?
Hello, It seems when a cached file is saved from calling `dataset.map` for preprocessing, it gets the user permissions and none of the user's group permissions. As we share data files across members of our team, this is causing a bit of an issue as we have to continually reset the permission of the files. Do you kno...
20
Only user permission of saved cache files, not group Hello, It seems when a cached file is saved from calling `dataset.map` for preprocessing, it gets the user permissions and none of the user's group permissions. As we share data files across members of our team, this is causing a bit of an issue as we have to co...
[ -0.1428572983, 0.247191906, -0.1365760118, 0.0218380671, -0.0611332282, 0.1519797444, 0.2972183824, 0.1749405861, -0.2172028422, -0.0215460751, -0.0839368999, -0.05868081, 0.0317839496, -0.2856926024, -0.0319772884, 0.1332599819, 0.1821878999, 0.0443538725, 0.09654852, -0.06093...
https://github.com/huggingface/datasets/issues/2065
Only user permission of saved cache files, not group
> This way you just need to set the permissions once after doing load_dataset for example, and then all the new transformed cached files will have the same permissions. I was referring to this. Ensuring that newly generated `cache_files` have the same permissions
Hello, It seems when a cached file is saved from calling `dataset.map` for preprocessing, it gets the user permissions and none of the user's group permissions. As we share data files across members of our team, this is causing a bit of an issue as we have to continually reset the permission of the files. Do you kno...
43
Only user permission of saved cache files, not group Hello, It seems when a cached file is saved from calling `dataset.map` for preprocessing, it gets the user permissions and none of the user's group permissions. As we share data files across members of our team, this is causing a bit of an issue as we have to co...
[ -0.1649945378, 0.2504472435, -0.1311803758, 0.0018535872, 0.016402984, 0.0696857721, 0.3859350979, 0.2012925148, -0.2431758046, -0.0268523376, -0.0832430199, 0.0317554809, 0.0088324305, -0.2032844871, -0.0566227026, 0.1244932339, 0.2251380682, 0.1058565378, -0.0469127782, -0.07...
https://github.com/huggingface/datasets/issues/2065
Only user permission of saved cache files, not group
Yes exactly I imagine users can first do `load_dataset`, then chmod on the arrow files. After that all the new cache files could have the same permissions as the first arrow files. Opinions on this ?
Hello, It seems when a cached file is saved from calling `dataset.map` for preprocessing, it gets the user permissions and none of the user's group permissions. As we share data files across members of our team, this is causing a bit of an issue as we have to continually reset the permission of the files. Do you kno...
36
Only user permission of saved cache files, not group Hello, It seems when a cached file is saved from calling `dataset.map` for preprocessing, it gets the user permissions and none of the user's group permissions. As we share data files across members of our team, this is causing a bit of an issue as we have to co...
[ -0.0982317701, 0.2663791478, -0.1286434084, 0.0853283852, -0.1320200711, 0.0446738563, 0.3324170113, 0.118530862, -0.2356888205, -0.0621480606, -0.0989210829, 0.0370315984, 0.0195825957, -0.2461389899, -0.0805590227, 0.0872822478, 0.1227617264, 0.1072843671, -0.0332508348, -0.0...
https://github.com/huggingface/datasets/issues/2065
Only user permission of saved cache files, not group
Sounds nice but I feel this is a sub-part of the approach mentioned by @siddk. Instead of letting the user set new permissions by itself first and then making sure newly generated files have same permissions why don't we ask the user initially only what they want? What are your thoughts?
Hello, It seems when a cached file is saved from calling `dataset.map` for preprocessing, it gets the user permissions and none of the user's group permissions. As we share data files across members of our team, this is causing a bit of an issue as we have to continually reset the permission of the files. Do you kno...
51
Only user permission of saved cache files, not group Hello, It seems when a cached file is saved from calling `dataset.map` for preprocessing, it gets the user permissions and none of the user's group permissions. As we share data files across members of our team, this is causing a bit of an issue as we have to co...
[ -0.0215069037, 0.3034678698, -0.1613318473, -0.0614499636, -0.0943000093, 0.0116145983, 0.3593586683, 0.2102661431, -0.2881744504, 0.0673981607, 0.0218293946, -0.1237790212, 0.0104281837, -0.2228576392, -0.1488353312, 0.1429992467, 0.1225393414, 0.1753846407, 0.0643008575, -0.0...
https://github.com/huggingface/datasets/issues/2065
Only user permission of saved cache files, not group
Yes sounds good. Should this be a parameter in `load_dataset` ? Or an env variable ? Or use the value of `os.umask` ?
Hello, It seems when a cached file is saved from calling `dataset.map` for preprocessing, it gets the user permissions and none of the user's group permissions. As we share data files across members of our team, this is causing a bit of an issue as we have to continually reset the permission of the files. Do you kno...
23
Only user permission of saved cache files, not group Hello, It seems when a cached file is saved from calling `dataset.map` for preprocessing, it gets the user permissions and none of the user's group permissions. As we share data files across members of our team, this is causing a bit of an issue as we have to co...
[ -0.1157245561, 0.1915661544, -0.1204444394, 0.0102189155, -0.0046566022, 0.0738375112, 0.354893744, 0.0830472782, -0.0977412835, 0.0234236158, -0.1051576287, 0.1137062684, 0.0743693709, -0.2714213431, -0.0607255846, 0.1525206864, 0.2136560231, 0.0242684763, 0.031402085, -0.1150...
https://github.com/huggingface/datasets/issues/2065
Only user permission of saved cache files, not group
Ideally it should be a parameter in `load_dataset` but I'm not sure how important it is for the users (considering only important things should go into `load_dataset` parameters)
Hello, It seems when a cached file is saved from calling `dataset.map` for preprocessing, it gets the user permissions and none of the user's group permissions. As we share data files across members of our team, this is causing a bit of an issue as we have to continually reset the permission of the files. Do you kno...
28
Only user permission of saved cache files, not group Hello, It seems when a cached file is saved from calling `dataset.map` for preprocessing, it gets the user permissions and none of the user's group permissions. As we share data files across members of our team, this is causing a bit of an issue as we have to co...
[ -0.1360279322, 0.235768497, -0.1311272085, 0.0613819696, -0.0553659126, 0.0940650776, 0.3185191154, 0.1379299611, -0.1731731147, 0.0212594587, -0.0261568185, 0.0268606544, 0.0603255518, -0.2584317923, -0.0404043496, 0.0852193683, 0.1406023651, 0.0469480008, 0.0417575464, -0.063...
https://github.com/huggingface/datasets/issues/2065
Only user permission of saved cache files, not group
I think it's fairly important; for context, our team uses a shared file-system where many folks run experiments based on datasets that are cached by other users. For example, I might start a training run, downloading a dataset. Then, a couple of days later, a collaborator using the same repository might want to use ...
Hello, It seems when a cached file is saved from calling `dataset.map` for preprocessing, it gets the user permissions and none of the user's group permissions. As we share data files across members of our team, this is causing a bit of an issue as we have to continually reset the permission of the files. Do you kno...
123
Only user permission of saved cache files, not group Hello, It seems when a cached file is saved from calling `dataset.map` for preprocessing, it gets the user permissions and none of the user's group permissions. As we share data files across members of our team, this is causing a bit of an issue as we have to co...
[ -0.1335192621, 0.2638702393, -0.0887797996, -0.0136510385, -0.0483199582, 0.1047769263, 0.3552220464, 0.1191609427, -0.1658601463, 0.0378847234, -0.1065237299, -0.024867354, -0.019191971, -0.3137135506, -0.1179800183, 0.1778142303, 0.1263327897, 0.0858632103, 0.0602511838, -0.1...
https://github.com/huggingface/datasets/issues/2065
Only user permission of saved cache files, not group
Maybe let's start by defaulting to the user's umask ! Do you want to give it a try @bhavitvyamalik ?
Hello, It seems when a cached file is saved from calling `dataset.map` for preprocessing, it gets the user permissions and none of the user's group permissions. As we share data files across members of our team, this is causing a bit of an issue as we have to continually reset the permission of the files. Do you kno...
20
Only user permission of saved cache files, not group Hello, It seems when a cached file is saved from calling `dataset.map` for preprocessing, it gets the user permissions and none of the user's group permissions. As we share data files across members of our team, this is causing a bit of an issue as we have to co...
[ -0.1802951694, 0.1989278644, -0.1055654213, -0.0858239308, 0.0109792966, 0.0972138494, 0.3096799254, 0.0719593614, -0.1649408191, 0.0873356611, -0.0806743279, 0.0011492542, 0.065245986, -0.2650190294, -0.0587539822, 0.1390801072, 0.1544880718, -0.0080778683, -0.0103488108, -0.0...
https://github.com/huggingface/datasets/issues/2065
Only user permission of saved cache files, not group
Yeah sure! Instead of using default `0o644` should I first extract umask of current user and then use `os.umask` on it? We can do it inside `Dataset` class so that all folders/files created during the call use running user's umask
Hello, It seems when a cached file is saved from calling `dataset.map` for preprocessing, it gets the user permissions and none of the user's group permissions. As we share data files across members of our team, this is causing a bit of an issue as we have to continually reset the permission of the files. Do you kno...
40
Only user permission of saved cache files, not group Hello, It seems when a cached file is saved from calling `dataset.map` for preprocessing, it gets the user permissions and none of the user's group permissions. As we share data files across members of our team, this is causing a bit of an issue as we have to co...
[ -0.2242728323, 0.2261130214, -0.0817529857, 0.0094755189, 0.0471358672, 0.0149757145, 0.2634159029, 0.0719239488, -0.191496104, 0.0407302305, -0.1341531277, 0.0863358602, 0.0493937656, -0.3142189384, -0.1167452484, 0.1358983964, 0.1529259533, -0.0015894095, -0.0006152364, -0.09...
https://github.com/huggingface/datasets/issues/2065
Only user permission of saved cache files, not group
You can get the umask using `os.umask` and then I guess you can just use `os.chmod` as in your previous PR, but with the right permissions depending on the umask.
Hello, It seems when a cached file is saved from calling `dataset.map` for preprocessing, it gets the user permissions and none of the user's group permissions. As we share data files across members of our team, this is causing a bit of an issue as we have to continually reset the permission of the files. Do you kno...
30
Only user permission of saved cache files, not group Hello, It seems when a cached file is saved from calling `dataset.map` for preprocessing, it gets the user permissions and none of the user's group permissions. As we share data files across members of our team, this is causing a bit of an issue as we have to co...
[ -0.1857261211, 0.1737921089, -0.1075275838, -0.1851043403, -0.0081703532, 0.0839047879, 0.3565378189, 0.0757275075, -0.1108320281, 0.1052553654, -0.2269388586, 0.1543241888, 0.1039313897, -0.2934494317, -0.0649804473, 0.2179357409, 0.0900316983, -0.0351475365, -0.0519522093, -0...
https://github.com/huggingface/datasets/issues/2065
Only user permission of saved cache files, not group
FWIW, we have this issue with other caches - e.g. `transformers` model files. So probably will need to backport this into `transformers` as well. thanks @thomwolf for the pointer.
Hello, It seems when a cached file is saved from calling `dataset.map` for preprocessing, it gets the user permissions and none of the user's group permissions. As we share data files across members of our team, this is causing a bit of an issue as we have to continually reset the permission of the files. Do you kno...
29
Only user permission of saved cache files, not group Hello, It seems when a cached file is saved from calling `dataset.map` for preprocessing, it gets the user permissions and none of the user's group permissions. As we share data files across members of our team, this is causing a bit of an issue as we have to co...
[ -0.1207441166, 0.2110619843, -0.0841050595, 0.0468186028, -0.0028636423, 0.1459372342, 0.3854050636, 0.1391603798, -0.201168403, -0.0585674644, -0.1354778409, -0.0176229645, 0.1156199351, -0.2905228734, -0.0399505459, 0.117588222, 0.1337424666, 0.0035108118, -0.03668743, -0.083...
https://github.com/huggingface/datasets/issues/2065
Only user permission of saved cache files, not group
Hi @stas00, For this should we use the same umask code in the respective model directory inside `TRANSFORMERS_CACHE`?
Hello, It seems when a cached file is saved from calling `dataset.map` for preprocessing, it gets the user permissions and none of the user's group permissions. As we share data files across members of our team, this is causing a bit of an issue as we have to continually reset the permission of the files. Do you kno...
18
Only user permission of saved cache files, not group Hello, It seems when a cached file is saved from calling `dataset.map` for preprocessing, it gets the user permissions and none of the user's group permissions. As we share data files across members of our team, this is causing a bit of an issue as we have to co...
[ -0.2081564963, 0.1117954925, -0.06123624, 0.0407365635, -0.0061200503, 0.1279417574, 0.3894427121, 0.0571592487, -0.2127815485, -0.0419887863, -0.1921855807, -0.1231596395, 0.1095726267, -0.3343215287, -0.0680714771, 0.1472347081, 0.1823889911, -0.0031140426, -0.0365215428, -0....
https://github.com/huggingface/datasets/issues/2061
Cannot load udpos subsets from xtreme dataset using load_dataset()
@lhoestq Adding "_" to the class labels in the dataset script will fix the issue. The bigger issue IMO is that the data files are in conll format, but the examples are tokens, not sentences.
Hello, I am trying to load the udpos English subset from xtreme dataset, but this faces an error during loading. I am using datasets v1.4.1, pip install. I have tried with other udpos languages which also fail, though loading a different subset altogether (such as XNLI) has no issue. I have also tried on Colab and ...
35
Cannot load udpos subsets from xtreme dataset using load_dataset() Hello, I am trying to load the udpos English subset from xtreme dataset, but this faces an error during loading. I am using datasets v1.4.1, pip install. I have tried with other udpos languages which also fail, though loading a different subset al...
[ -0.3464029729, -0.0752864107, 0.0026137996, 0.4443908632, 0.3713297844, 0.0691191331, 0.2405687869, 0.039475292, 0.5101771951, 0.206124723, -0.3589057028, 0.0527048111, 0.0860962719, 0.080866091, 0.1144125089, -0.3151046634, -0.0602491498, 0.0173559412, -0.2503676713, -0.065504...
https://github.com/huggingface/datasets/issues/2061
Cannot load udpos subsets from xtreme dataset using load_dataset()
Hi ! Thanks for reporting @adzcodez > @lhoestq Adding "_" to the class labels in the dataset script will fix the issue. > > The bigger issue IMO is that the data files are in conll format, but the examples are tokens, not sentences. You're right: "_" should be added to the list of labels, and the examples m...
Hello, I am trying to load the udpos English subset from xtreme dataset, but this faces an error during loading. I am using datasets v1.4.1, pip install. I have tried with other udpos languages which also fail, though loading a different subset altogether (such as XNLI) has no issue. I have also tried on Colab and ...
66
Cannot load udpos subsets from xtreme dataset using load_dataset() Hello, I am trying to load the udpos English subset from xtreme dataset, but this faces an error during loading. I am using datasets v1.4.1, pip install. I have tried with other udpos languages which also fail, though loading a different subset al...
[ -0.3464029729, -0.0752864107, 0.0026137996, 0.4443908632, 0.3713297844, 0.0691191331, 0.2405687869, 0.039475292, 0.5101771951, 0.206124723, -0.3589057028, 0.0527048111, 0.0860962719, 0.080866091, 0.1144125089, -0.3151046634, -0.0602491498, 0.0173559412, -0.2503676713, -0.065504...
https://github.com/huggingface/datasets/issues/2061
Cannot load udpos subsets from xtreme dataset using load_dataset()
@lhoestq Can you please label this issue with the "good first issue" label? I'm not sure I'll find time to fix this. To resolve it, the user should: 1. add `"_"` to the list of labels 2. transform the udpos subset to the conll format (I think the preprocessing logic can be borrowed from [the original repo](https:/...
Hello, I am trying to load the udpos English subset from xtreme dataset, but this faces an error during loading. I am using datasets v1.4.1, pip install. I have tried with other udpos languages which also fail, though loading a different subset altogether (such as XNLI) has no issue. I have also tried on Colab and ...
84
Cannot load udpos subsets from xtreme dataset using load_dataset() Hello, I am trying to load the udpos English subset from xtreme dataset, but this faces an error during loading. I am using datasets v1.4.1, pip install. I have tried with other udpos languages which also fail, though loading a different subset al...
[ -0.3464029729, -0.0752864107, 0.0026137996, 0.4443908632, 0.3713297844, 0.0691191331, 0.2405687869, 0.039475292, 0.5101771951, 0.206124723, -0.3589057028, 0.0527048111, 0.0860962719, 0.080866091, 0.1144125089, -0.3151046634, -0.0602491498, 0.0173559412, -0.2503676713, -0.065504...
https://github.com/huggingface/datasets/issues/2061
Cannot load udpos subsets from xtreme dataset using load_dataset()
I tried fixing this issue, but its working fine in the dev version : "1.6.2.dev0" I think somebody already fixed it.
Hello, I am trying to load the udpos English subset from xtreme dataset, but this faces an error during loading. I am using datasets v1.4.1, pip install. I have tried with other udpos languages which also fail, though loading a different subset altogether (such as XNLI) has no issue. I have also tried on Colab and ...
21
Cannot load udpos subsets from xtreme dataset using load_dataset() Hello, I am trying to load the udpos English subset from xtreme dataset, but this faces an error during loading. I am using datasets v1.4.1, pip install. I have tried with other udpos languages which also fail, though loading a different subset al...
[ -0.3464029729, -0.0752864107, 0.0026137996, 0.4443908632, 0.3713297844, 0.0691191331, 0.2405687869, 0.039475292, 0.5101771951, 0.206124723, -0.3589057028, 0.0527048111, 0.0860962719, 0.080866091, 0.1144125089, -0.3151046634, -0.0602491498, 0.0173559412, -0.2503676713, -0.065504...
https://github.com/huggingface/datasets/issues/2061
Cannot load udpos subsets from xtreme dataset using load_dataset()
Hi, after #2326, the lines with pos tags equal to `"_"` are filtered out when generating the dataset, so this fixes the KeyError described above. However, the udpos subset should be in the conll format i.e. it should yield sequences of tokens and not single tokens, so it would be great to see this fixed (feel free t...
Hello, I am trying to load the udpos English subset from xtreme dataset, but this faces an error during loading. I am using datasets v1.4.1, pip install. I have tried with other udpos languages which also fail, though loading a different subset altogether (such as XNLI) has no issue. I have also tried on Colab and ...
70
Cannot load udpos subsets from xtreme dataset using load_dataset() Hello, I am trying to load the udpos English subset from xtreme dataset, but this faces an error during loading. I am using datasets v1.4.1, pip install. I have tried with other udpos languages which also fail, though loading a different subset al...
[ -0.3464029729, -0.0752864107, 0.0026137996, 0.4443908632, 0.3713297844, 0.0691191331, 0.2405687869, 0.039475292, 0.5101771951, 0.206124723, -0.3589057028, 0.0527048111, 0.0860962719, 0.080866091, 0.1144125089, -0.3151046634, -0.0602491498, 0.0173559412, -0.2503676713, -0.065504...
https://github.com/huggingface/datasets/issues/2059
Error while following docs to load the `ted_talks_iwslt` dataset
This has been fixed in #2064 by @mariosasko (thanks again !) The fix is available on the master branch and we'll do a new release very soon :)
I am currently trying to load the `ted_talks_iwslt` dataset into google colab. The [docs](https://huggingface.co/datasets/ted_talks_iwslt) mention the following way of doing so. ```python dataset = load_dataset("ted_talks_iwslt", language_pair=("it", "pl"), year="2014") ``` Executing it results in the error ...
28
Error while following docs to load the `ted_talks_iwslt` dataset I am currently trying to load the `ted_talks_iwslt` dataset into google colab. The [docs](https://huggingface.co/datasets/ted_talks_iwslt) mention the following way of doing so. ```python dataset = load_dataset("ted_talks_iwslt", language_pair=("...
[ -0.2140098512, 0.1284444332, 0.0485114902, 0.2001313716, 0.0312911794, 0.0933724716, 0.6712163687, 0.1944086105, 0.124102287, -0.0619185939, -0.0495756231, 0.3381052315, -0.2910546958, 0.4190411866, 0.0773886517, -0.2890877426, 0.0997466967, 0.0731820986, -0.0559581779, 0.20605...
https://github.com/huggingface/datasets/issues/2056
issue with opus100/en-fr dataset
@lhoestq I also deleted the cache and redownload the file and still the same issue, I appreciate any help on this. thanks
Hi I am running run_mlm.py code of huggingface repo with opus100/fr-en pair, I am getting this error, note that this error occurs for only this pairs and not the other pairs. Any idea why this is occurring? and how I can solve this? Thanks a lot @lhoestq for your help in advance. ` thread '<unnamed>' panicked...
22
issue with opus100/en-fr dataset Hi I am running run_mlm.py code of huggingface repo with opus100/fr-en pair, I am getting this error, note that this error occurs for only this pairs and not the other pairs. Any idea why this is occurring? and how I can solve this? Thanks a lot @lhoestq for your help in advanc...
[ -0.4117368162, -0.1673515141, -0.0156451836, 0.3972487152, 0.1182574853, 0.0823765546, 0.192252174, 0.2819979489, -0.1431498379, 0.2579927444, -0.2424351871, -0.0248961765, 0.1322235465, 0.3834213614, -0.2399136871, -0.172585234, -0.078272, 0.1208089516, -0.4165076315, -0.12028...
https://github.com/huggingface/datasets/issues/2056
issue with opus100/en-fr dataset
Here please find the minimal code to reproduce the issue @lhoestq note this only happens with MT5TokenizerFast ``` from datasets import load_dataset from transformers import MT5TokenizerFast def get_tokenized_dataset(dataset_name, dataset_config_name, tokenizer): datasets = load_dataset(dataset_name, data...
Hi I am running run_mlm.py code of huggingface repo with opus100/fr-en pair, I am getting this error, note that this error occurs for only this pairs and not the other pairs. Any idea why this is occurring? and how I can solve this? Thanks a lot @lhoestq for your help in advance. ` thread '<unnamed>' panicked...
114
issue with opus100/en-fr dataset Hi I am running run_mlm.py code of huggingface repo with opus100/fr-en pair, I am getting this error, note that this error occurs for only this pairs and not the other pairs. Any idea why this is occurring? and how I can solve this? Thanks a lot @lhoestq for your help in advanc...
[ -0.4117368162, -0.1673515141, -0.0156451836, 0.3972487152, 0.1182574853, 0.0823765546, 0.192252174, 0.2819979489, -0.1431498379, 0.2579927444, -0.2424351871, -0.0248961765, 0.1322235465, 0.3834213614, -0.2399136871, -0.172585234, -0.078272, 0.1208089516, -0.4165076315, -0.12028...
https://github.com/huggingface/datasets/issues/2056
issue with opus100/en-fr dataset
as per https://github.com/huggingface/tokenizers/issues/626 this looks like to be the tokenizer bug, I therefore, reported it there https://github.com/huggingface/tokenizers/issues/626 and I am closing this one.
Hi I am running run_mlm.py code of huggingface repo with opus100/fr-en pair, I am getting this error, note that this error occurs for only this pairs and not the other pairs. Any idea why this is occurring? and how I can solve this? Thanks a lot @lhoestq for your help in advance. ` thread '<unnamed>' panicked...
23
issue with opus100/en-fr dataset Hi I am running run_mlm.py code of huggingface repo with opus100/fr-en pair, I am getting this error, note that this error occurs for only this pairs and not the other pairs. Any idea why this is occurring? and how I can solve this? Thanks a lot @lhoestq for your help in advanc...
[ -0.4117368162, -0.1673515141, -0.0156451836, 0.3972487152, 0.1182574853, 0.0823765546, 0.192252174, 0.2819979489, -0.1431498379, 0.2579927444, -0.2424351871, -0.0248961765, 0.1322235465, 0.3834213614, -0.2399136871, -0.172585234, -0.078272, 0.1208089516, -0.4165076315, -0.12028...
https://github.com/huggingface/datasets/issues/2055
is there a way to override a dataset object saved with save_to_disk?
I tried this way, but when there is a mapping process to the dataset, it again uses a random cache name. atm, I am trying to use the following method by setting an exact cache file, ``` dataset_with_embedding =csv_dataset.map( partial(self.embed, ctx_encoder=ctx_encoder, ctx_tokenizer=s...
At the moment when I use save_to_disk, it uses the arbitrary name for the arrow file. Is there a way to override such an object?
69
is there a way to override a dataset object saved with save_to_disk? At the moment when I use save_to_disk, it uses the arbitrary name for the arrow file. Is there a way to override such an object? I tried this way, but when there is a mapping process to the dataset, it again uses a random cache name. atm, I am t...
[ 0.0187767781, -0.1548895091, -0.0146413958, -0.039853096, 0.2449169457, 0.27030316, 0.1437665373, 0.1278597862, 0.1100361273, 0.1771550775, 0.350451678, 0.4532105327, -0.2038120478, -0.0362367705, 0.2456677556, 0.1528114676, 0.4748809338, -0.0202536061, -0.0256068241, 0.2713933...
https://github.com/huggingface/datasets/issues/2055
is there a way to override a dataset object saved with save_to_disk?
I'm not sure I understand your issue, can you elaborate ? `cache_file_name` is indeed an argument you can set to specify the cache file that will be used for the processed dataset. By default the file is named with something like `cache-<fingerprint>.arrow` where the fingerprint is a hash.
At the moment when I use save_to_disk, it uses the arbitrary name for the arrow file. Is there a way to override such an object?
48
is there a way to override a dataset object saved with save_to_disk? At the moment when I use save_to_disk, it uses the arbitrary name for the arrow file. Is there a way to override such an object? I'm not sure I understand your issue, can you elaborate ? `cache_file_name` is indeed an argument you can set to ...
[ 0.005880767, -0.0709292591, -0.045395609, 0.0091857668, 0.2773498893, 0.2527235746, 0.2612769604, 0.2426068187, -0.0535697229, 0.1069627926, 0.302035749, 0.4447039962, -0.1616927981, -0.127957508, 0.2059168369, 0.006320091, 0.3146952689, -0.1188133508, -0.0984395891, 0.17643161...
https://github.com/huggingface/datasets/issues/2055
is there a way to override a dataset object saved with save_to_disk?
Let's say I am updating a set of embedding in a dataset that is around 40GB inside a training loop every 500 steps (Ex: calculating the embeddings in updated ctx_encoder in RAG and saving it to the passage path). So when we use **dataset_object.save_to_disk('passage_path_directory')** it will save the new dataset obje...
At the moment when I use save_to_disk, it uses the arbitrary name for the arrow file. Is there a way to override such an object?
134
is there a way to override a dataset object saved with save_to_disk? At the moment when I use save_to_disk, it uses the arbitrary name for the arrow file. Is there a way to override such an object? Let's say I am updating a set of embedding in a dataset that is around 40GB inside a training loop every 500 steps (...
[ 0.0457347296, -0.036520455, -0.0065598618, -0.0663141087, 0.2652108967, 0.1467555761, 0.0534214415, 0.1336882412, -0.1323261708, 0.3330437541, 0.3283967674, 0.4801913202, -0.3888705373, 0.0933783948, 0.2319838256, 0.0744828135, 0.3734701276, -0.0670182034, -0.1254187822, 0.1810...
https://github.com/huggingface/datasets/issues/2054
Could not find file for ZEST dataset
This has been fixed in #2057 by @matt-peters (thanks again !) The fix is available on the master branch and we'll do a new release very soon :)
I am trying to use zest dataset from Allen AI using below code in colab, ``` !pip install -q datasets from datasets import load_dataset dataset = load_dataset("zest") ``` I am getting the following error, ``` Using custom data configuration default Downloading and preparing dataset zest/default (download: ...
28
Could not find file for ZEST dataset I am trying to use zest dataset from Allen AI using below code in colab, ``` !pip install -q datasets from datasets import load_dataset dataset = load_dataset("zest") ``` I am getting the following error, ``` Using custom data configuration default Downloading and pre...
[ -0.5492255688, -0.1562175602, -0.0973051786, 0.3641073108, 0.3012050092, 0.0931076631, 0.0318724141, 0.2230130881, 0.2383637428, 0.4310352504, -0.2034215778, -0.0097505981, -0.1162818447, 0.1218978018, -0.0843501762, 0.2447762489, -0.2308544964, 0.4342671633, 0.4354298115, 0.08...
https://github.com/huggingface/datasets/issues/2052
Timit_asr dataset repeats examples
Hi, this was fixed by #1995, so you can wait for the next release or install the package directly from the master branch with the following command: ```bash pip install git+https://github.com/huggingface/datasets ```
Summary When loading timit_asr dataset on datasets 1.4+, every row in the dataset is the same Steps to reproduce As an example, on this code there is the text from the training part: Code snippet: ``` from datasets import load_dataset, load_metric timit = load_dataset("timit_asr") timit['train']['text']...
32
Timit_asr dataset repeats examples Summary When loading timit_asr dataset on datasets 1.4+, every row in the dataset is the same Steps to reproduce As an example, on this code there is the text from the training part: Code snippet: ``` from datasets import load_dataset, load_metric timit = load_dataset...
[ 0.0693236738, -0.2360806763, 0.0143214399, 0.3875416517, 0.2564871609, -0.0527752116, 0.3894460201, 0.1509235799, -0.3530937433, 0.2145140916, 0.03625625, 0.3838867843, -0.1914108992, 0.3149207532, 0.1428201497, 0.0897243023, -0.0645081997, 0.1746295989, -0.510945797, -0.207854...
https://github.com/huggingface/datasets/issues/2050
Build custom dataset to fine-tune Wav2Vec2
Sure you can use the json loader ```python data_files = {"train": "path/to/your/train_data.json", "test": "path/to/your/test_data.json"} train_dataset = load_dataset("json", data_files=data_files, split="train") test_dataset = load_dataset("json", data_files=data_files, split="test") ``` You just need to make s...
Thank you for your recent tutorial on how to finetune Wav2Vec2 on a custom dataset. The example you gave here (https://huggingface.co/blog/fine-tune-xlsr-wav2vec2) was on the CommonVoice dataset. However, what if I want to load my own dataset? I have a manifest (transcript and their audio files) in a JSON file.
51
Build custom dataset to fine-tune Wav2Vec2 Thank you for your recent tutorial on how to finetune Wav2Vec2 on a custom dataset. The example you gave here (https://huggingface.co/blog/fine-tune-xlsr-wav2vec2) was on the CommonVoice dataset. However, what if I want to load my own dataset? I have a manifest (transcript ...
[ -0.0861031339, 0.0252728332, 0.0131383101, 0.0565956756, -0.0154162087, -0.0016973368, 0.0401862338, 0.167280525, 0.1113970429, -0.0323932879, -0.2139551491, 0.4168871343, -0.2158390582, 0.1142554954, -0.0266242996, 0.0259717833, -0.1856243461, 0.2257935852, 0.1270138472, -0.02...
https://github.com/huggingface/datasets/issues/2046
add_faisis_index gets very slow when doing it interatively
I think faiss automatically sets the number of threads to use to build the index. Can you check how many CPU cores are being used when you build the index in `use_own_knowleldge_dataset` as compared to this script ? Are there other programs running (maybe for rank>0) ?
As the below code suggests, I want to run add_faisis_index in every nth interaction from the training loop. I have 7.2 million documents. Usually, it takes 2.5 hours (if I run an as a separate process similar to the script given in rag/use_own_knowleldge_dataset.py). Now, this takes usually 5hrs. Is this normal? Any ...
47
add_faisis_index gets very slow when doing it interatively As the below code suggests, I want to run add_faisis_index in every nth interaction from the training loop. I have 7.2 million documents. Usually, it takes 2.5 hours (if I run an as a separate process similar to the script given in rag/use_own_knowleldge_d...
[ -0.4988242388, -0.2682781816, -0.0245006643, 0.1282280535, 0.0594395846, 0.2077843398, 0.1214630306, 0.4241298139, 0.2917420566, 0.2924969494, -0.11882516, 0.1883150488, 0.137212798, 0.0858211592, -0.1362563074, 0.1750426441, 0.2370328307, 0.0853345543, 0.2607981563, -0.1344140...
https://github.com/huggingface/datasets/issues/2046
add_faiss_index gets very slow when doing it iteratively
Hi, I am running add_faiss_index during the training process of RAG from the master process (rank 0). But at that exact moment, I do not run any other process, since I only do it every 5000 training steps. I think what you say is correct. It depends on the number of CPU cores. I did an experiment to compare...
As the code below suggests, I want to run add_faiss_index in every nth iteration of the training loop. I have 7.2 million documents. Usually, it takes 2.5 hours (if I run it as a separate process similar to the script given in rag/use_own_knowleldge_dataset.py). Now, this usually takes 5hrs. Is this normal? Any ...
108
add_faiss_index gets very slow when doing it iteratively As the code below suggests, I want to run add_faiss_index in every nth iteration of the training loop. I have 7.2 million documents. Usually, it takes 2.5 hours (if I run it as a separate process similar to the script given in rag/use_own_knowleldge_d...
[ -0.4988242388, -0.2682781816, -0.0245006643, 0.1282280535, 0.0594395846, 0.2077843398, 0.1214630306, 0.4241298139, 0.2917420566, 0.2924969494, -0.11882516, 0.1883150488, 0.137212798, 0.0858211592, -0.1362563074, 0.1750426441, 0.2370328307, 0.0853345543, 0.2607981563, -0.1344140...
https://github.com/huggingface/datasets/issues/2046
add_faiss_index gets very slow when doing it iteratively
Can you try to set the number of threads manually? If you set the same number of threads for both `use_own_knowledge_dataset.py` and RAG training, it should take the same amount of time. You can see how to set the number of threads in the faiss wiki: https://github.com/facebookresearch/faiss/wiki/Threads-and-asyn...
As the code below suggests, I want to run add_faiss_index in every nth iteration of the training loop. I have 7.2 million documents. Usually, it takes 2.5 hours (if I run it as a separate process similar to the script given in rag/use_own_knowleldge_dataset.py). Now, this usually takes 5hrs. Is this normal? Any ...
49
add_faiss_index gets very slow when doing it iteratively As the code below suggests, I want to run add_faiss_index in every nth iteration of the training loop. I have 7.2 million documents. Usually, it takes 2.5 hours (if I run it as a separate process similar to the script given in rag/use_own_knowleldge_d...
[ -0.4988242388, -0.2682781816, -0.0245006643, 0.1282280535, 0.0594395846, 0.2077843398, 0.1214630306, 0.4241298139, 0.2917420566, 0.2924969494, -0.11882516, 0.1883150488, 0.137212798, 0.0858211592, -0.1362563074, 0.1750426441, 0.2370328307, 0.0853345543, 0.2607981563, -0.1344140...
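One way to pin the thread count on both sides is through OpenMP, which faiss uses for index building. A minimal sketch, assuming you want the same fixed thread count (8 here, purely illustrative) for both the standalone script and the training run:

```python
import os

# Must be set before faiss is imported, since the OpenMP pool size
# is picked up when the runtime initializes.
os.environ["OMP_NUM_THREADS"] = "8"

# With faiss available you can also set it explicitly at runtime:
#   import faiss
#   faiss.omp_set_num_threads(8)

print(os.environ["OMP_NUM_THREADS"])
```

With the same value in both processes, the per-call `add_faiss_index` time should be directly comparable.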
https://github.com/huggingface/datasets/issues/2046
add_faiss_index gets very slow when doing it iteratively
Ok, I will report the details soon. I am the first one on the list, and currently add_index is being computed for the 3rd time in the loop. Actually it seems like the time taken to complete each iteration is the same, but around 1 hour more compared to running it without the training loop. At the moment this takes 5hr...
As the code below suggests, I want to run add_faiss_index in every nth iteration of the training loop. I have 7.2 million documents. Usually, it takes 2.5 hours (if I run it as a separate process similar to the script given in rag/use_own_knowleldge_dataset.py). Now, this usually takes 5hrs. Is this normal? Any ...
91
add_faiss_index gets very slow when doing it iteratively As the code below suggests, I want to run add_faiss_index in every nth iteration of the training loop. I have 7.2 million documents. Usually, it takes 2.5 hours (if I run it as a separate process similar to the script given in rag/use_own_knowleldge_d...
[ -0.4988242388, -0.2682781816, -0.0245006643, 0.1282280535, 0.0594395846, 0.2077843398, 0.1214630306, 0.4241298139, 0.2917420566, 0.2924969494, -0.11882516, 0.1883150488, 0.137212798, 0.0858211592, -0.1362563074, 0.1750426441, 0.2370328307, 0.0853345543, 0.2607981563, -0.1344140...
https://github.com/huggingface/datasets/issues/2046
add_faiss_index gets very slow when doing it iteratively
@lhoestq on a different note, I read about using Faiss-GPU, but the documentation says we should use it when the dataset is able to fit into GPU memory. Although this might work, in the long term this is not that practical for me. https://github.com/matsui528/faiss_tips
As the code below suggests, I want to run add_faiss_index in every nth iteration of the training loop. I have 7.2 million documents. Usually, it takes 2.5 hours (if I run it as a separate process similar to the script given in rag/use_own_knowleldge_dataset.py). Now, this usually takes 5hrs. Is this normal? Any ...
45
add_faiss_index gets very slow when doing it iteratively As the code below suggests, I want to run add_faiss_index in every nth iteration of the training loop. I have 7.2 million documents. Usually, it takes 2.5 hours (if I run it as a separate process similar to the script given in rag/use_own_knowleldge_d...
[ -0.4988242388, -0.2682781816, -0.0245006643, 0.1282280535, 0.0594395846, 0.2077843398, 0.1214630306, 0.4241298139, 0.2917420566, 0.2924969494, -0.11882516, 0.1883150488, 0.137212798, 0.0858211592, -0.1362563074, 0.1750426441, 0.2370328307, 0.0853345543, 0.2607981563, -0.1344140...
https://github.com/huggingface/datasets/issues/2046
add_faiss_index gets very slow when doing it iteratively
@lhoestq Hi, I executed the **use_own_dataset.py** script independently and asked a few of my friends to run their programs on the HPC machine at the same time. Once many other processes are running, the add_index function naturally slows down. So basically the speed of add_index depends ...
As the code below suggests, I want to run add_faiss_index in every nth iteration of the training loop. I have 7.2 million documents. Usually, it takes 2.5 hours (if I run it as a separate process similar to the script given in rag/use_own_knowleldge_dataset.py). Now, this usually takes 5hrs. Is this normal? Any ...
121
add_faiss_index gets very slow when doing it iteratively As the code below suggests, I want to run add_faiss_index in every nth iteration of the training loop. I have 7.2 million documents. Usually, it takes 2.5 hours (if I run it as a separate process similar to the script given in rag/use_own_knowleldge_d...
[ -0.4988242388, -0.2682781816, -0.0245006643, 0.1282280535, 0.0594395846, 0.2077843398, 0.1214630306, 0.4241298139, 0.2917420566, 0.2924969494, -0.11882516, 0.1883150488, 0.137212798, 0.0858211592, -0.1362563074, 0.1750426441, 0.2370328307, 0.0853345543, 0.2607981563, -0.1344140...
https://github.com/huggingface/datasets/issues/2046
add_faiss_index gets very slow when doing it iteratively
It's a matter of tradeoffs. HNSW is fast at query time but takes some time to build. A flat index is fast to build but is "slow" at query time. An IVF index is probably a good choice for you: fast building and fast queries (but still slower queries than HNSW). Note that for an IVF index you would need to have an ...
As the code below suggests, I want to run add_faiss_index in every nth iteration of the training loop. I have 7.2 million documents. Usually, it takes 2.5 hours (if I run it as a separate process similar to the script given in rag/use_own_knowleldge_dataset.py). Now, this usually takes 5hrs. Is this normal? Any ...
181
add_faiss_index gets very slow when doing it iteratively As the code below suggests, I want to run add_faiss_index in every nth iteration of the training loop. I have 7.2 million documents. Usually, it takes 2.5 hours (if I run it as a separate process similar to the script given in rag/use_own_knowleldge_d...
[ -0.4988242388, -0.2682781816, -0.0245006643, 0.1282280535, 0.0594395846, 0.2077843398, 0.1214630306, 0.4241298139, 0.2917420566, 0.2924969494, -0.11882516, 0.1883150488, 0.137212798, 0.0858211592, -0.1362563074, 0.1750426441, 0.2370328307, 0.0853345543, 0.2607981563, -0.1344140...
https://github.com/huggingface/datasets/issues/2046
add_faiss_index gets very slow when doing it iteratively
@lhoestq Thanks a lot for sharing all this prior knowledge. Just asking, what would be a good nlist parameter for 30 million embeddings?
As the code below suggests, I want to run add_faiss_index in every nth iteration of the training loop. I have 7.2 million documents. Usually, it takes 2.5 hours (if I run it as a separate process similar to the script given in rag/use_own_knowleldge_dataset.py). Now, this usually takes 5hrs. Is this normal? Any ...
24
add_faiss_index gets very slow when doing it iteratively As the code below suggests, I want to run add_faiss_index in every nth iteration of the training loop. I have 7.2 million documents. Usually, it takes 2.5 hours (if I run it as a separate process similar to the script given in rag/use_own_knowleldge_d...
[ -0.4988242388, -0.2682781816, -0.0245006643, 0.1282280535, 0.0594395846, 0.2077843398, 0.1214630306, 0.4241298139, 0.2917420566, 0.2924969494, -0.11882516, 0.1883150488, 0.137212798, 0.0858211592, -0.1362563074, 0.1750426441, 0.2370328307, 0.0853345543, 0.2607981563, -0.1344140...
https://github.com/huggingface/datasets/issues/2046
add_faiss_index gets very slow when doing it iteratively
When IVF is used alone, nlist should be between `4*sqrt(n)` and `16*sqrt(n)`. For more details take a look at [this section of the Faiss wiki](https://github.com/facebookresearch/faiss/wiki/Guidelines-to-choose-an-index#how-big-is-the-dataset)
As the code below suggests, I want to run add_faiss_index in every nth iteration of the training loop. I have 7.2 million documents. Usually, it takes 2.5 hours (if I run it as a separate process similar to the script given in rag/use_own_knowleldge_dataset.py). Now, this usually takes 5hrs. Is this normal? Any ...
25
add_faiss_index gets very slow when doing it iteratively As the code below suggests, I want to run add_faiss_index in every nth iteration of the training loop. I have 7.2 million documents. Usually, it takes 2.5 hours (if I run it as a separate process similar to the script given in rag/use_own_knowleldge_d...
[ -0.4988242388, -0.2682781816, -0.0245006643, 0.1282280535, 0.0594395846, 0.2077843398, 0.1214630306, 0.4241298139, 0.2917420566, 0.2924969494, -0.11882516, 0.1883150488, 0.137212798, 0.0858211592, -0.1362563074, 0.1750426441, 0.2370328307, 0.0853345543, 0.2607981563, -0.1344140...
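Plugging the 30-million-embedding case into that rule of thumb gives a concrete range (rounding to a power of two is a common convention, not part of the wiki's guideline):

```python
import math

n = 30_000_000                # number of embeddings
lo = 4 * math.sqrt(n)         # lower end of the recommended nlist range
hi = 16 * math.sqrt(n)        # upper end

# Pick the power of two closest to the geometric middle of the range
nlist = 2 ** round(math.log2(8 * math.sqrt(n)))

print(int(lo), int(hi), nlist)  # -> 21908 87635 32768
```

So for ~30M vectors, any nlist between roughly 22k and 88k is within the recommendation; 32768 is a convenient choice inside that range.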
https://github.com/huggingface/datasets/issues/2046
add_faiss_index gets very slow when doing it iteratively
@lhoestq Thanks a lot for the help you have given to solve this issue. As per my experiments, an IVF index suits my case well and it is a lot faster. Using it can make the entire end-to-end-trainable RAG a lot faster. So I will close this issue. Will do the final PR soon.
As the code below suggests, I want to run add_faiss_index in every nth iteration of the training loop. I have 7.2 million documents. Usually, it takes 2.5 hours (if I run it as a separate process similar to the script given in rag/use_own_knowleldge_dataset.py). Now, this usually takes 5hrs. Is this normal? Any ...
56
add_faiss_index gets very slow when doing it iteratively As the code below suggests, I want to run add_faiss_index in every nth iteration of the training loop. I have 7.2 million documents. Usually, it takes 2.5 hours (if I run it as a separate process similar to the script given in rag/use_own_knowleldge_d...
[ -0.4988242388, -0.2682781816, -0.0245006643, 0.1282280535, 0.0594395846, 0.2077843398, 0.1214630306, 0.4241298139, 0.2917420566, 0.2924969494, -0.11882516, 0.1883150488, 0.137212798, 0.0858211592, -0.1362563074, 0.1750426441, 0.2370328307, 0.0853345543, 0.2607981563, -0.1344140...
https://github.com/huggingface/datasets/issues/2040
ValueError: datasets' indices [1] come from memory and datasets' indices [0] come from disk
Hi ! To help me understand the situation, can you print the values of `load_from_disk(PATH_DATA_CLS_A)['train']._indices_data_files` and `load_from_disk(PATH_DATA_CLS_B)['train']._indices_data_files` ? They should both have a path to an arrow file Also note that from #2025 concatenating datasets will no longer hav...
Hi there, I am trying to concat two datasets that I've previously saved to disk via `save_to_disk()` like so (note that both are saved as `DataDict`, `PATH_DATA_CLS_*` are `Path`-objects): ```python concatenate_datasets([load_from_disk(PATH_DATA_CLS_A)['train'], load_from_disk(PATH_DATA_CLS_B)['train']]) ``` Yie...
41
ValueError: datasets' indices [1] come from memory and datasets' indices [0] come from disk Hi there, I am trying to concat two datasets that I've previously saved to disk via `save_to_disk()` like so (note that both are saved as `DataDict`, `PATH_DATA_CLS_*` are `Path`-objects): ```python concatenate_datasets([...
[ -0.0112980474, -0.0258977618, -0.044959303, 0.507442534, 0.1715030521, 0.1827168465, 0.0579557084, 0.1286475658, -0.0628907159, 0.1336226612, 0.0703733191, 0.2955009043, -0.0843801424, -0.1025009379, -0.3257563114, -0.1238993853, 0.2124685496, -0.041784659, -0.1874170452, -0.11...
https://github.com/huggingface/datasets/issues/2040
ValueError: datasets' indices [1] come from memory and datasets' indices [0] come from disk
Sure, thanks for the fast reply! For dataset A: `[{'filename': 'drive/MyDrive/data_target_task/dataset_a/train/cache-4797266bf4db1eb7.arrow'}]` For dataset B: `[]` No clue why for B it returns nothing. `PATH_DATA_CLS_B` is exactly the same in `save_to_disk` and `load_from_disk`... Also I can verify that the fold...
Hi there, I am trying to concat two datasets that I've previously saved to disk via `save_to_disk()` like so (note that both are saved as `DataDict`, `PATH_DATA_CLS_*` are `Path`-objects): ```python concatenate_datasets([load_from_disk(PATH_DATA_CLS_A)['train'], load_from_disk(PATH_DATA_CLS_B)['train']]) ``` Yie...
43
ValueError: datasets' indices [1] come from memory and datasets' indices [0] come from disk Hi there, I am trying to concat two datasets that I've previously saved to disk via `save_to_disk()` like so (note that both are saved as `DataDict`, `PATH_DATA_CLS_*` are `Path`-objects): ```python concatenate_datasets([...
[ -0.0077576935, -0.0073159589, -0.0188266672, 0.5390485525, 0.2280754, 0.1842921227, 0.053107705, 0.1309664845, -0.0564402267, 0.1484983116, 0.0628860742, 0.2342553437, -0.0325046629, -0.1666058004, -0.3268121183, -0.0597666688, 0.2327088118, 0.007199544, -0.0789072067, -0.11945...
https://github.com/huggingface/datasets/issues/2040
ValueError: datasets' indices [1] come from memory and datasets' indices [0] come from disk
In the next release you'll be able to concatenate any kinds of dataset (either from memory or from disk). For now I'd suggest you to flatten the indices of the A and B datasets. This will remove the indices mapping and you will be able to concatenate them. You can flatten the indices with ```python dataset = datas...
Hi there, I am trying to concat two datasets that I've previously saved to disk via `save_to_disk()` like so (note that both are saved as `DataDict`, `PATH_DATA_CLS_*` are `Path`-objects): ```python concatenate_datasets([load_from_disk(PATH_DATA_CLS_A)['train'], load_from_disk(PATH_DATA_CLS_B)['train']]) ``` Yie...
59
ValueError: datasets' indices [1] come from memory and datasets' indices [0] come from disk Hi there, I am trying to concat two datasets that I've previously saved to disk via `save_to_disk()` like so (note that both are saved as `DataDict`, `PATH_DATA_CLS_*` are `Path`-objects): ```python concatenate_datasets([...
[ -0.0903854072, -0.0803098753, -0.0217146855, 0.4296607971, 0.212290287, 0.266731441, 0.0407161713, 0.1795160323, -0.0718927681, 0.174695462, -0.0009915812, 0.1995583177, -0.0881772488, -0.0326923952, -0.3439052701, -0.0558384322, 0.193051815, 0.0187105779, -0.1041580811, 0.0098...
https://github.com/huggingface/datasets/issues/2040
ValueError: datasets' indices [1] come from memory and datasets' indices [0] come from disk
Indeed this works. Not the most elegant solution, but it does the trick. Thanks a lot!
Hi there, I am trying to concat two datasets that I've previously saved to disk via `save_to_disk()` like so (note that both are saved as `DataDict`, `PATH_DATA_CLS_*` are `Path`-objects): ```python concatenate_datasets([load_from_disk(PATH_DATA_CLS_A)['train'], load_from_disk(PATH_DATA_CLS_B)['train']]) ``` Yie...
16
ValueError: datasets' indices [1] come from memory and datasets' indices [0] come from disk Hi there, I am trying to concat two datasets that I've previously saved to disk via `save_to_disk()` like so (note that both are saved as `DataDict`, `PATH_DATA_CLS_*` are `Path`-objects): ```python concatenate_datasets([...
[ -0.0483819954, -0.0796561092, -0.0216173735, 0.4603260458, 0.1872409582, 0.2654177845, 0.0407088846, 0.1193433106, -0.0479681939, 0.148567006, 0.0525872968, 0.1789599061, -0.0630661473, -0.0712665319, -0.298771292, -0.0633007959, 0.2559679151, -0.0143303107, -0.1243878826, -0.0...
https://github.com/huggingface/datasets/issues/2038
outdated dataset_infos.json might fail verifications
Hi ! Thanks for reporting. To update the dataset_infos.json you can run: ``` datasets-cli test ./datasets/doc2dial --all_configs --save_infos --ignore_verifications ```
The [doc2dial/dataset_infos.json](https://github.com/huggingface/datasets/blob/master/datasets/doc2dial/dataset_infos.json) is outdated. It would fail data_loader when verifying download checksum etc.. Could you please update this file or point me how to update this file? Thank you.
20
outdated dataset_infos.json might fail verifications The [doc2dial/dataset_infos.json](https://github.com/huggingface/datasets/blob/master/datasets/doc2dial/dataset_infos.json) is outdated. It would fail data_loader when verifying download checksum etc.. Could you please update this file or point me how to update ...
[ -0.1200924441, 0.1984145492, -0.1118750274, 0.1850013286, 0.1130968705, 0.2163037509, 0.103563495, 0.4946838915, 0.206972301, -0.0703192726, 0.0729326755, 0.0472202078, 0.1981548965, 0.2417431474, -0.0689257905, -0.0911236778, -0.0267711692, 0.2664071321, 0.0681160688, 0.088586...
https://github.com/huggingface/datasets/issues/2035
wiki40b/wikipedia for almost all languages cannot be downloaded
Dear @lhoestq, for the wikipedia dataset I also get the same error; I would greatly appreciate it if you could have a look into this dataset as well. Below please find the command to reproduce the error: ``` dataset = load_dataset("wikipedia", "20200501.bg") print(dataset) ``` Your library is my only chance to be able to train...
Hi I am trying to download the data as below: ``` from datasets import load_dataset dataset = load_dataset("wiki40b", "cs") print(dataset) ``` I am getting this error. @lhoestq I will be grateful if you could assist me with this error. For almost all languages except english I am getting this error. I rea...
62
wiki40b/wikipedia for almost all languages cannot be downloaded Hi I am trying to download the data as below: ``` from datasets import load_dataset dataset = load_dataset("wiki40b", "cs") print(dataset) ``` I am getting this error. @lhoestq I will be grateful if you could assist me with this error. For alm...
[ -0.2544981837, -0.0779208615, -0.1537582725, 0.4308664799, 0.3997030258, 0.3504616916, 0.136850521, 0.5331581831, 0.1893272549, 0.0235845894, -0.1773036271, -0.094217591, 0.0944036171, 0.0111841382, -0.0297050904, -0.4591732919, -0.067290552, 0.0295369718, -0.1352711469, -0.116...
https://github.com/huggingface/datasets/issues/2035
wiki40b/wikipedia for almost all languages cannot be downloaded
Hi @dorost1234, Try installing this library first, `pip install 'apache-beam[gcp]' --use-feature=2020-resolver`, followed by loading the dataset like this using the beam runner. `dataset = load_dataset("wiki40b", "cs", beam_runner='DirectRunner')` I also read in the error stack trace that: > Trying to generate a dataset ...
Hi I am trying to download the data as below: ``` from datasets import load_dataset dataset = load_dataset("wiki40b", "cs") print(dataset) ``` I am getting this error. @lhoestq I will be grateful if you could assist me with this error. For almost all languages except english I am getting this error. I rea...
83
wiki40b/wikipedia for almost all languages cannot be downloaded Hi I am trying to download the data as below: ``` from datasets import load_dataset dataset = load_dataset("wiki40b", "cs") print(dataset) ``` I am getting this error. @lhoestq I will be grateful if you could assist me with this error. For alm...
[ -0.2544981837, -0.0779208615, -0.1537582725, 0.4308664799, 0.3997030258, 0.3504616916, 0.136850521, 0.5331581831, 0.1893272549, 0.0235845894, -0.1773036271, -0.094217591, 0.0944036171, 0.0111841382, -0.0297050904, -0.4591732919, -0.067290552, 0.0295369718, -0.1352711469, -0.116...
https://github.com/huggingface/datasets/issues/2035
wiki40b/wikipedia for almost all languages cannot be downloaded
For the wikipedia dataset, it looks like the files it's looking for are no longer available. For `bg`, I checked [here](https://dumps.wikimedia.org/bgwiki/). For this I think `dataset_infos.json` for this dataset has to be made again? You'll also have to load this dataset using the beam runner.
Hi I am trying to download the data as below: ``` from datasets import load_dataset dataset = load_dataset("wiki40b", "cs") print(dataset) ``` I am getting this error. @lhoestq I will be grateful if you could assist me with this error. For almost all languages except english I am getting this error. I rea...
41
wiki40b/wikipedia for almost all languages cannot be downloaded Hi I am trying to download the data as below: ``` from datasets import load_dataset dataset = load_dataset("wiki40b", "cs") print(dataset) ``` I am getting this error. @lhoestq I will be grateful if you could assist me with this error. For alm...
[ -0.2544981837, -0.0779208615, -0.1537582725, 0.4308664799, 0.3997030258, 0.3504616916, 0.136850521, 0.5331581831, 0.1893272549, 0.0235845894, -0.1773036271, -0.094217591, 0.0944036171, 0.0111841382, -0.0297050904, -0.4591732919, -0.067290552, 0.0295369718, -0.1352711469, -0.116...
https://github.com/huggingface/datasets/issues/2035
wiki40b/wikipedia for almost all languages cannot be downloaded
Hello @dorost1234, Indeed, Wikipedia datasets need a lot of preprocessing and this is done using Apache Beam. That is the reason why it is required that you install Apache Beam in order to perform this preprocessing. For some specific default parameters (English Wikipedia), Hugging Face has already preprocessed t...
Hi I am trying to download the data as below: ``` from datasets import load_dataset dataset = load_dataset("wiki40b", "cs") print(dataset) ``` I am getting this error. @lhoestq I will be grateful if you could assist me with this error. For almost all languages except english I am getting this error. I rea...
94
wiki40b/wikipedia for almost all languages cannot be downloaded Hi I am trying to download the data as below: ``` from datasets import load_dataset dataset = load_dataset("wiki40b", "cs") print(dataset) ``` I am getting this error. @lhoestq I will be grateful if you could assist me with this error. For alm...
[ -0.2544981837, -0.0779208615, -0.1537582725, 0.4308664799, 0.3997030258, 0.3504616916, 0.136850521, 0.5331581831, 0.1893272549, 0.0235845894, -0.1773036271, -0.094217591, 0.0944036171, 0.0111841382, -0.0297050904, -0.4591732919, -0.067290552, 0.0295369718, -0.1352711469, -0.116...
https://github.com/huggingface/datasets/issues/2035
wiki40b/wikipedia for almost all languages cannot be downloaded
Hi, I would really appreciate it if huggingface could kindly provide preprocessed datasets; processing these datasets requires sufficiently large resources, which I unfortunately do not have access to, and perhaps many others don't either. Thanks.
Hi I am trying to download the data as below: ``` from datasets import load_dataset dataset = load_dataset("wiki40b", "cs") print(dataset) ``` I am getting this error. @lhoestq I will be grateful if you could assist me with this error. For almost all languages except english I am getting this error. I rea...
185
wiki40b/wikipedia for almost all languages cannot be downloaded Hi I am trying to download the data as below: ``` from datasets import load_dataset dataset = load_dataset("wiki40b", "cs") print(dataset) ``` I am getting this error. @lhoestq I will be grateful if you could assist me with this error. For alm...
[ -0.2544981837, -0.0779208615, -0.1537582725, 0.4308664799, 0.3997030258, 0.3504616916, 0.136850521, 0.5331581831, 0.1893272549, 0.0235845894, -0.1773036271, -0.094217591, 0.0944036171, 0.0111841382, -0.0297050904, -0.4591732919, -0.067290552, 0.0295369718, -0.1352711469, -0.116...
https://github.com/huggingface/datasets/issues/2035
wiki40b/wikipedia for almost all languages cannot be downloaded
Hi everyone, thanks for the helpful pointers. I did it as @bhavitvyamalik suggested, but for me this freezes on this command for several hours: `Downloading and preparing dataset wiki40b/cs (download: Unknown size, generated: Unknown size, post-processed: Unknown size, total: Unknown size) to /users/dara/cache/datasets...
Hi I am trying to download the data as below: ``` from datasets import load_dataset dataset = load_dataset("wiki40b", "cs") print(dataset) ``` I am getting this error. @lhoestq I will be grateful if you could assist me with this error. For almost all languages except english I am getting this error. I rea...
65
wiki40b/wikipedia for almost all languages cannot be downloaded Hi I am trying to download the data as below: ``` from datasets import load_dataset dataset = load_dataset("wiki40b", "cs") print(dataset) ``` I am getting this error. @lhoestq I will be grateful if you could assist me with this error. For alm...
[ -0.2544981837, -0.0779208615, -0.1537582725, 0.4308664799, 0.3997030258, 0.3504616916, 0.136850521, 0.5331581831, 0.1893272549, 0.0235845894, -0.1773036271, -0.094217591, 0.0944036171, 0.0111841382, -0.0297050904, -0.4591732919, -0.067290552, 0.0295369718, -0.1352711469, -0.116...
https://github.com/huggingface/datasets/issues/2035
wiki40b/wikipedia for almost all languages cannot be downloaded
Hi @dorost1234, The dataset size is 631.84 MiB, so depending on your internet speed it'll take some time. You can monitor your internet speed meanwhile to see whether it's downloading the dataset or not (use `nload` if you're on Linux/Mac to monitor the same). In my case it took around 3-4 mins. Since they haven't used ...
Hi I am trying to download the data as below: ``` from datasets import load_dataset dataset = load_dataset("wiki40b", "cs") print(dataset) ``` I am getting this error. @lhoestq I will be grateful if you could assist me with this error. For almost all languages except english I am getting this error. I rea...
65
wiki40b/wikipedia for almost all languages cannot be downloaded Hi I am trying to download the data as below: ``` from datasets import load_dataset dataset = load_dataset("wiki40b", "cs") print(dataset) ``` I am getting this error. @lhoestq I will be grateful if you could assist me with this error. For alm...
[ -0.2544981837, -0.0779208615, -0.1537582725, 0.4308664799, 0.3997030258, 0.3504616916, 0.136850521, 0.5331581831, 0.1893272549, 0.0235845894, -0.1773036271, -0.094217591, 0.0944036171, 0.0111841382, -0.0297050904, -0.4591732919, -0.067290552, 0.0295369718, -0.1352711469, -0.116...
https://github.com/huggingface/datasets/issues/2035
wiki40b/wikipedia for almost all languages cannot be downloaded
Hi, thanks, my internet speed should be good, but this really freezes for me. This is how I try to get this dataset: `from datasets import load_dataset dataset = load_dataset("wiki40b", "cs", beam_runner='DirectRunner')` The output I see is also different from what you see after writing this command: `Downlo...
Hi I am trying to download the data as below: ``` from datasets import load_dataset dataset = load_dataset("wiki40b", "cs") print(dataset) ``` I am getting this error. @lhoestq I will be grateful if you could assist me with this error. For almost all languages except english I am getting this error. I rea...
212
wiki40b/wikipedia for almost all languages cannot be downloaded Hi I am trying to download the data as below: ``` from datasets import load_dataset dataset = load_dataset("wiki40b", "cs") print(dataset) ``` I am getting this error. @lhoestq I will be grateful if you could assist me with this error. For alm...
[ -0.2544981837, -0.0779208615, -0.1537582725, 0.4308664799, 0.3997030258, 0.3504616916, 0.136850521, 0.5331581831, 0.1893272549, 0.0235845894, -0.1773036271, -0.094217591, 0.0944036171, 0.0111841382, -0.0297050904, -0.4591732919, -0.067290552, 0.0295369718, -0.1352711469, -0.116...
https://github.com/huggingface/datasets/issues/2035
wiki40b/wikipedia for almost all languages cannot be downloaded
I tried this on another machine (followed the same procedure I've mentioned above). This is what it shows (during the freeze period) for me: ``` >>> dataset = load_dataset("wiki40b", "cs", beam_runner='DirectRunner') Downloading: 5.26kB [00:00, 1.23MB/s] ...
Hi I am trying to download the data as below: ``` from datasets import load_dataset dataset = load_dataset("wiki40b", "cs") print(dataset) ``` I am getting this error. @lhoestq I will be grateful if you could assist me with this error. For almost all languages except english I am getting this error. I rea...
156
wiki40b/wikipedia for almost all languages cannot be downloaded Hi I am trying to download the data as below: ``` from datasets import load_dataset dataset = load_dataset("wiki40b", "cs") print(dataset) ``` I am getting this error. @lhoestq I will be grateful if you could assist me with this error. For alm...
[ -0.2544981837, -0.0779208615, -0.1537582725, 0.4308664799, 0.3997030258, 0.3504616916, 0.136850521, 0.5331581831, 0.1893272549, 0.0235845894, -0.1773036271, -0.094217591, 0.0944036171, 0.0111841382, -0.0297050904, -0.4591732919, -0.067290552, 0.0295369718, -0.1352711469, -0.116...
https://github.com/huggingface/datasets/issues/2035
wiki40b/wikipedia for almost all languages cannot be downloaded
Hi, I honestly also tried now on another machine and nothing shows up after hours of waiting. Are you sure you have not set any specific setting? Maybe Google Cloud, which it seems is used here, needs some credential setting? Thanks for any suggestions on this.
Hi I am trying to download the data as below: ``` from datasets import load_dataset dataset = load_dataset("wiki40b", "cs") print(dataset) ``` I am getting this error. @lhoestq I will be grateful if you could assist me with this error. For almost all languages except english I am getting this error. I rea...
259
wiki40b/wikipedia for almost all languages cannot be downloaded Hi I am trying to download the data as below: ``` from datasets import load_dataset dataset = load_dataset("wiki40b", "cs") print(dataset) ``` I am getting this error. @lhoestq I will be grateful if you could assist me with this error. For alm...
[ -0.2544981837, -0.0779208615, -0.1537582725, 0.4308664799, 0.3997030258, 0.3504616916, 0.136850521, 0.5331581831, 0.1893272549, 0.0235845894, -0.1773036271, -0.094217591, 0.0944036171, 0.0111841382, -0.0297050904, -0.4591732919, -0.067290552, 0.0295369718, -0.1352711469, -0.116...
https://github.com/huggingface/datasets/issues/2031
wikipedia.py generator that extracts XML doesn't release memory
Hi @miyamonz Thanks for investigating this issue, good job ! It would be awesome to integrate your fix in the library, could you open a pull request ?
I tried downloading Japanese wikipedia, but it always failed because of out of memory maybe. I found that the generator function that extracts XML data in wikipedia.py doesn't release memory in the loop. https://github.com/huggingface/datasets/blob/13a5b7db992ad5cf77895e4c0f76595314390418/datasets/wikipedia/wikip...
28
wikipedia.py generator that extracts XML doesn't release memory I tried downloading Japanese wikipedia, but it always failed because of out of memory maybe. I found that the generator function that extracts XML data in wikipedia.py doesn't release memory in the loop. https://github.com/huggingface/datasets/blob...
[ 0.2186527103, -0.1109621823, -0.0523671359, 0.6246848106, 0.3160347939, 0.0942654833, -0.2117364407, 0.3332335949, 0.2093813419, 0.2579503655, 0.0060538808, 0.1517626941, 0.232789889, -0.1265529841, -0.2175300866, -0.2103437781, 0.0437291563, 0.0743471459, 0.0371217132, -0.2225...
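The memory growth described in this record comes from an XML-extracting generator that never releases parsed elements. This is not the actual `wikipedia.py` code, but an illustrative stdlib sketch of the streaming pattern that keeps memory bounded: parse with `iterparse` and call `elem.clear()` after yielding each page.

```python
# Illustrative sketch (not the actual wikipedia.py internals): streaming
# XML extraction with xml.etree.ElementTree.iterparse. Clearing each
# element after yielding releases its children, so memory stays bounded
# even on a very large dump.
import io
import xml.etree.ElementTree as ET

def extract_pages(xml_file):
    """Yield (title, text) pairs from a <page>-based XML dump."""
    for _event, elem in ET.iterparse(xml_file, events=("end",)):
        if elem.tag == "page":
            title = elem.findtext("title")
            text = elem.findtext("revision/text")
            yield title, text
            elem.clear()  # free this element's children; without this, memory grows

dump = io.StringIO(
    "<mediawiki>"
    "<page><title>A</title><revision><text>aaa</text></revision></page>"
    "<page><title>B</title><revision><text>bbb</text></revision></page>"
    "</mediawiki>"
)
pages = list(extract_pages(dump))
print(pages)  # [('A', 'aaa'), ('B', 'bbb')]
```

The same idea applies to any generator holding parsed trees: yield, then drop the reference before parsing the next element.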
https://github.com/huggingface/datasets/issues/2029
Loading a faiss index KeyError
In your code `dataset2` doesn't contain the "embeddings" column, since it is created from the pandas DataFrame with columns "text" and "label". Therefore when you call `dataset2[embeddings_name]`, you get a `KeyError`. If you want the "embeddings" column back, you can create `dataset2` with ```python dataset2 =...
I've recently been testing out RAG and DPR embeddings, and I've run into an issue that is not apparent in the documentation. The basic steps are: 1. Create a dataset (dataset1) 2. Create an embeddings column using DPR 3. Add a faiss index to the dataset 4. Save faiss index to a file 5. Create a new dataset (d...
65
Loading a faiss index KeyError I've recently been testing out RAG and DPR embeddings, and I've run into an issue that is not apparent in the documentation. The basic steps are: 1. Create a dataset (dataset1) 2. Create an embeddings column using DPR 3. Add a faiss index to the dataset 4. Save faiss index to a...
[ 0.0550746396, -0.6190828681, 0.0702501833, 0.361313343, 0.1553998291, 0.271070689, 0.3453340828, 0.0510572679, 0.5391326547, 0.2844027579, -0.0778137073, 0.1564424038, 0.409176439, -0.0636666939, -0.0600454509, -0.0365853496, 0.2299644053, 0.2709700763, 0.2771179974, -0.1172368...
https://github.com/huggingface/datasets/issues/2029
Loading a faiss index KeyError
Ok in that case HF should fix their misleading example at https://huggingface.co/docs/datasets/faiss_and_ea.html#adding-a-faiss-index I copy-pasted it here. > When you are done with your queries you can save your index on disk: > > ```python > ds_with_embeddings.save_faiss_index('embeddings', 'my_index.faiss...
I've recently been testing out RAG and DPR embeddings, and I've run into an issue that is not apparent in the documentation. The basic steps are: 1. Create a dataset (dataset1) 2. Create an embeddings column using DPR 3. Add a faiss index to the dataset 4. Save faiss index to a file 5. Create a new dataset (d...
57
Loading a faiss index KeyError I've recently been testing out RAG and DPR embeddings, and I've run into an issue that is not apparent in the documentation. The basic steps are: 1. Create a dataset (dataset1) 2. Create an embeddings column using DPR 3. Add a faiss index to the dataset 4. Save faiss index to a...
[ 0.0172794405, -0.5772310495, 0.0631229579, 0.279512912, 0.08014974, 0.2722066343, 0.3179303706, 0.1032483205, 0.5430010557, 0.2702878416, -0.1154732406, 0.1119507924, 0.4107091725, -0.1057186797, -0.0582558289, -0.019388089, 0.2288172841, 0.2564586699, 0.2251709551, -0.13394226...
https://github.com/huggingface/datasets/issues/2029
Loading a faiss index KeyError
Hi ! The code of the example is valid. An index is a search engine, it's not considered a column of a dataset. When you do `ds.load_faiss_index("embeddings", 'my_index.faiss')`, it attaches an index named "embeddings" to the dataset but it doesn't re-add the "embeddings" column. You can list the indexes of a datas...
I've recently been testing out RAG and DPR embeddings, and I've run into an issue that is not apparent in the documentation. The basic steps are: 1. Create a dataset (dataset1) 2. Create an embeddings column using DPR 3. Add a faiss index to the dataset 4. Save faiss index to a file 5. Create a new dataset (d...
119
Loading a faiss index KeyError I've recently been testing out RAG and DPR embeddings, and I've run into an issue that is not apparent in the documentation. The basic steps are: 1. Create a dataset (dataset1) 2. Create an embeddings column using DPR 3. Add a faiss index to the dataset 4. Save faiss index to a...
[ 0.1397082061, -0.5335462689, 0.0514716692, 0.3196976483, 0.126474604, 0.2750567198, 0.4119960666, -0.0404782631, 0.6586285233, 0.2073544115, -0.0624134056, 0.1538582444, 0.3866051137, -0.0489015877, 0.0130300531, -0.017361477, 0.3034027517, 0.2208625078, 0.2414612025, -0.156598...
https://github.com/huggingface/datasets/issues/2029
Loading a faiss index KeyError
> If I understand correctly by reading this example you thought that it was re-adding the "embeddings" column. Yes. I was trying to use the dataset in RAG and it complained that the dataset didn't have the right columns. No problems when loading the dataset with `load_from_disk` and then doing `load_faiss_index` Wh...
I've recently been testing out RAG and DPR embeddings, and I've run into an issue that is not apparent in the documentation. The basic steps are: 1. Create a dataset (dataset1) 2. Create an embeddings column using DPR 3. Add a faiss index to the dataset 4. Save faiss index to a file 5. Create a new dataset (d...
115
Loading a faiss index KeyError I've recently been testing out RAG and DPR embeddings, and I've run into an issue that is not apparent in the documentation. The basic steps are: 1. Create a dataset (dataset1) 2. Create an embeddings column using DPR 3. Add a faiss index to the dataset 4. Save faiss index to a...
[ 0.1141660064, -0.6093640327, 0.0622364432, 0.3725111783, 0.1373019069, 0.286283344, 0.4066282809, 0.0279657934, 0.561445415, 0.2074248046, -0.0576390624, 0.1514220685, 0.4737912714, -0.0607360303, -0.0529890433, -0.0416154042, 0.270290494, 0.2431623936, 0.2737346292, -0.1626691...
https://github.com/huggingface/datasets/issues/2026
KeyError on using map after renaming a column
Hi, Actually, the error occurs due to these two lines: ```python raw_dataset.set_format('torch',columns=['img','label']) raw_dataset = raw_dataset.rename_column('img','image') ``` `Dataset.rename_column` doesn't update the `_format_columns` attribute, previously defined by `Dataset.set_format`, with the new colum...
Hi, I'm trying to use `cifar10` dataset. I want to rename the `img` feature to `image` in order to make it consistent with `mnist`, which I'm also planning to use. By doing this, I was trying to avoid modifying `prepare_train_features` function. Here is what I try: ```python transform = Compose([ToPILImage(),...
42
KeyError on using map after renaming a column Hi, I'm trying to use `cifar10` dataset. I want to rename the `img` feature to `image` in order to make it consistent with `mnist`, which I'm also planning to use. By doing this, I was trying to avoid modifying `prepare_train_features` function. Here is what I try: ...
[ 0.0343903154, 0.0062004873, -0.0635801479, -0.334228307, 0.4808439612, 0.2616390288, 0.6042661071, 0.2373320013, 0.137091592, 0.0865909606, 0.0540353619, 0.52703017, -0.1558954418, 0.2994628847, -0.1945008039, -0.0881170109, 0.4133821428, 0.0935038179, -0.1156335995, 0.19720198...
https://github.com/huggingface/datasets/issues/2026
KeyError on using map after renaming a column
Hi @mariosasko, Thanks for opening a PR on this :) Why does the old name also disappear?
Hi, I'm trying to use `cifar10` dataset. I want to rename the `img` feature to `image` in order to make it consistent with `mnist`, which I'm also planning to use. By doing this, I was trying to avoid modifying `prepare_train_features` function. Here is what I try: ```python transform = Compose([ToPILImage(),...
17
KeyError on using map after renaming a column Hi, I'm trying to use `cifar10` dataset. I want to rename the `img` feature to `image` in order to make it consistent with `mnist`, which I'm also planning to use. By doing this, I was trying to avoid modifying `prepare_train_features` function. Here is what I try: ...
[ 0.0343903154, 0.0062004873, -0.0635801479, -0.334228307, 0.4808439612, 0.2616390288, 0.6042661071, 0.2373320013, 0.137091592, 0.0865909606, 0.0540353619, 0.52703017, -0.1558954418, 0.2994628847, -0.1945008039, -0.0881170109, 0.4133821428, 0.0935038179, -0.1156335995, 0.19720198...
https://github.com/huggingface/datasets/issues/2026
KeyError on using map after renaming a column
I just merged @mariosasko's PR that fixes this issue. If it happens again, feel free to re-open :)
Hi, I'm trying to use `cifar10` dataset. I want to rename the `img` feature to `image` in order to make it consistent with `mnist`, which I'm also planning to use. By doing this, I was trying to avoid modifying `prepare_train_features` function. Here is what I try: ```python transform = Compose([ToPILImage(),...
20
KeyError on using map after renaming a column Hi, I'm trying to use `cifar10` dataset. I want to rename the `img` feature to `image` in order to make it consistent with `mnist`, which I'm also planning to use. By doing this, I was trying to avoid modifying `prepare_train_features` function. Here is what I try: ...
[ 0.0343903154, 0.0062004873, -0.0635801479, -0.334228307, 0.4808439612, 0.2616390288, 0.6042661071, 0.2373320013, 0.137091592, 0.0865909606, 0.0540353619, 0.52703017, -0.1558954418, 0.2994628847, -0.1945008039, -0.0881170109, 0.4133821428, 0.0935038179, -0.1156335995, 0.19720198...
https://github.com/huggingface/datasets/issues/2022
ValueError when rename_column on splitted dataset
Hi, This is a bug so thanks for reporting it. `Dataset.__setstate__` is the problem, which is called when `Dataset.rename_column` tries to copy the dataset with `copy.deepcopy(self)`. This only happens if the `split` arg in `load_dataset` was defined as `ReadInstruction`. To overcome this issue, use the named sp...
Hi there, I am loading `.tsv` file via `load_dataset` and subsequently split the rows into training and test set via the `ReadInstruction` API like so: ```python split = { 'train': ReadInstruction('train', to=90, unit='%'), 'test': ReadInstruction('train', from_=-10, unit='%') } dataset = load_datase...
66
ValueError when rename_column on splitted dataset Hi there, I am loading `.tsv` file via `load_dataset` and subsequently split the rows into training and test set via the `ReadInstruction` API like so: ```python split = { 'train': ReadInstruction('train', to=90, unit='%'), 'test': ReadInstruction('trai...
[ -0.0835752711, 0.2197993249, -0.0348291732, -0.0423847027, 0.4208770394, 0.0730622113, 0.6438040733, 0.4179843366, -0.022552656, 0.3396532536, -0.086686641, 0.3978037238, -0.0521847792, 0.3644354939, -0.2562584579, -0.2477058023, 0.1044138446, -0.0795423463, 0.1267364472, 0.269...
https://github.com/huggingface/datasets/issues/2022
ValueError when rename_column on splitted dataset
This has been fixed in #2043 , thanks @mariosasko The fix is available on master and we'll do a new release soon :) feel free to re-open if you still have issues
Hi there, I am loading `.tsv` file via `load_dataset` and subsequently split the rows into training and test set via the `ReadInstruction` API like so: ```python split = { 'train': ReadInstruction('train', to=90, unit='%'), 'test': ReadInstruction('train', from_=-10, unit='%') } dataset = load_datase...
32
ValueError when rename_column on splitted dataset Hi there, I am loading `.tsv` file via `load_dataset` and subsequently split the rows into training and test set via the `ReadInstruction` API like so: ```python split = { 'train': ReadInstruction('train', to=90, unit='%'), 'test': ReadInstruction('trai...
[ -0.0835752711, 0.2197993249, -0.0348291732, -0.0423847027, 0.4208770394, 0.0730622113, 0.6438040733, 0.4179843366, -0.022552656, 0.3396532536, -0.086686641, 0.3978037238, -0.0521847792, 0.3644354939, -0.2562584579, -0.2477058023, 0.1044138446, -0.0795423463, 0.1267364472, 0.269...
https://github.com/huggingface/datasets/issues/2021
Interactively doing save_to_disk and load_from_disk corrupts the datasets object?
Hi, Can you give us a minimal reproducible example? This [part](https://huggingface.co/docs/datasets/master/processing.html#controling-the-cache-behavior) of the docs explains how to control caching.
dataset_info.json file saved after using save_to_disk gets corrupted as follows. ![image](https://user-images.githubusercontent.com/16892570/110568474-ed969880-81b7-11eb-832f-2e5129656016.png) Is there a way to disable the cache that will save to /tmp/huggiface/datastes ? I have a feeling there is a seri...
19
Interactively doing save_to_disk and load_from_disk corrupts the datasets object? dataset_info.json file saved after using save_to_disk gets corrupted as follows. ![image](https://user-images.githubusercontent.com/16892570/110568474-ed969880-81b7-11eb-832f-2e5129656016.png) Is there a way to disable the...
[ -0.0748547912, -0.1469715089, 0.0561723448, 0.7758010626, 0.2905217111, 0.3213841021, -0.2317182869, 0.1311902255, 0.1942947507, 0.1316705048, -0.150269255, 0.0668350905, 0.2389937788, 0.2033747882, 0.1379985064, 0.2394485772, 0.4092519283, -0.1022172719, -0.2969143093, -0.0024...
https://github.com/huggingface/datasets/issues/2012
No upstream branch
What's the issue exactly ? Given an `upstream` remote repository with url `https://github.com/huggingface/datasets.git`, you can totally rebase from `upstream/master`. It's mentioned at the beginning how to add the `upstream` remote repository https://github.com/huggingface/datasets/blob/987df6b4e9e20fc0c92bc9...
Feels like the documentation on adding a new dataset is outdated? https://github.com/huggingface/datasets/blob/987df6b4e9e20fc0c92bc9df48137d170756fd7b/ADD_NEW_DATASET.md#L49-L54 There is no upstream branch on remote.
32
No upstream branch Feels like the documentation on adding a new dataset is outdated? https://github.com/huggingface/datasets/blob/987df6b4e9e20fc0c92bc9df48137d170756fd7b/ADD_NEW_DATASET.md#L49-L54 There is no upstream branch on remote. What's the issue exactly ? Given an `upstream` remote repository with...
[ -0.0541968048, -0.3429027796, -0.0709575489, -0.2234123796, 0.1322250813, 0.0014154331, 0.1212345138, 0.0149747608, -0.4329248965, 0.1663282663, 0.0103838434, -0.050456278, 0.1446349323, 0.2027384937, 0.0767324567, -0.0622125678, 0.2159231454, -0.1112824827, 0.0094954912, -0.38...
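The `upstream` setup from ADD_NEW_DATASET.md referenced above, run in a scratch repository so it is self-contained. The fetch/rebase steps need network access, so they are shown commented out.

```shell
# Add the huggingface/datasets repo as the "upstream" remote of a fork.
set -e
tmp=$(mktemp -d)
cd "$tmp"
git init -q .
git remote add upstream https://github.com/huggingface/datasets.git
git remote -v
# git fetch upstream          # needs network
# git rebase upstream/master  # needs the fetch above
```

In a real fork, `origin` points at your fork and `upstream` at huggingface/datasets, which is why a freshly cloned fork has no `upstream` until you add it.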
https://github.com/huggingface/datasets/issues/2012
No upstream branch
~~What difference is there with the default `origin` remote that is set when you clone the repo?~~ I've just understood that this applies to **forks** of the repo 🤡
Feels like the documentation on adding a new dataset is outdated? https://github.com/huggingface/datasets/blob/987df6b4e9e20fc0c92bc9df48137d170756fd7b/ADD_NEW_DATASET.md#L49-L54 There is no upstream branch on remote.
29
No upstream branch Feels like the documentation on adding a new dataset is outdated? https://github.com/huggingface/datasets/blob/987df6b4e9e20fc0c92bc9df48137d170756fd7b/ADD_NEW_DATASET.md#L49-L54 There is no upstream branch on remote. ~~What difference is there with the default `origin` remote that is set ...
[ -0.1526435763, -0.3908179402, -0.0568684936, -0.3803965151, -0.0535283312, -0.1234397143, 0.3151806891, -0.002233498, -0.3539058864, 0.2694294453, -0.0092936307, -0.046628058, 0.3955131471, 0.2351269126, 0.2467385978, -0.0680184066, 0.3274164796, -0.1903981566, 0.2409148216, -0...
https://github.com/huggingface/datasets/issues/2010
Local testing fails
I'm not able to reproduce on my side. Can you provide the full stacktrace please ? What version of `python` and `dill` do you have ? Which OS are you using ?
I'm following the CI setup as described in https://github.com/huggingface/datasets/blob/8eee4fa9e133fe873a7993ba746d32ca2b687551/.circleci/config.yml#L16-L19 in a new conda environment, at commit https://github.com/huggingface/datasets/commit/4de6dbf84e93dad97e1000120d6628c88954e5d4 and getting ``` FAILED...
32
Local testing fails I'm following the CI setup as described in https://github.com/huggingface/datasets/blob/8eee4fa9e133fe873a7993ba746d32ca2b687551/.circleci/config.yml#L16-L19 in a new conda environment, at commit https://github.com/huggingface/datasets/commit/4de6dbf84e93dad97e1000120d6628c88954e5d4 and ...
[ -0.1566001028, 0.0945400521, 0.0029506723, 0.0519756861, -0.1364505738, -0.2591091096, 0.4086019695, 0.2240648717, -0.1059592143, 0.2676887214, -0.0058871666, 0.0788333043, -0.1537663192, 0.5032164454, -0.2350833565, 0.1064348742, 0.0292591713, 0.1519075334, -0.2006690502, 0.07...
https://github.com/huggingface/datasets/issues/2010
Local testing fails
``` co_filename = '<ipython-input-2-e0383a102aae>', returned_obj = [0] def create_ipython_func(co_filename, returned_obj): def func(): ...
I'm following the CI setup as described in https://github.com/huggingface/datasets/blob/8eee4fa9e133fe873a7993ba746d32ca2b687551/.circleci/config.yml#L16-L19 in a new conda environment, at commit https://github.com/huggingface/datasets/commit/4de6dbf84e93dad97e1000120d6628c88954e5d4 and getting ``` FAILED...
47
Local testing fails I'm following the CI setup as described in https://github.com/huggingface/datasets/blob/8eee4fa9e133fe873a7993ba746d32ca2b687551/.circleci/config.yml#L16-L19 in a new conda environment, at commit https://github.com/huggingface/datasets/commit/4de6dbf84e93dad97e1000120d6628c88954e5d4 and ...
[ -0.1595423967, 0.0836865157, 0.0080742138, 0.0541946329, -0.0605915748, -0.253282249, 0.432231307, 0.321085602, 0.0665907636, 0.2115700543, -0.0241567474, 0.1039156392, -0.1923718899, 0.4951998889, -0.2158737332, 0.1359182149, 0.0552796684, 0.1934275925, -0.1318419129, 0.083860...
https://github.com/huggingface/datasets/issues/2010
Local testing fails
I managed to reproduce. This comes from the CodeType init signature that is different in Python 3.8.8. I opened a PR to fix this test. Thanks!
I'm following the CI setup as described in https://github.com/huggingface/datasets/blob/8eee4fa9e133fe873a7993ba746d32ca2b687551/.circleci/config.yml#L16-L19 in a new conda environment, at commit https://github.com/huggingface/datasets/commit/4de6dbf84e93dad97e1000120d6628c88954e5d4 and getting ``` FAILED...
27
Local testing fails I'm following the CI setup as described in https://github.com/huggingface/datasets/blob/8eee4fa9e133fe873a7993ba746d32ca2b687551/.circleci/config.yml#L16-L19 in a new conda environment, at commit https://github.com/huggingface/datasets/commit/4de6dbf84e93dad97e1000120d6628c88954e5d4 and ...
[ -0.2129509002, 0.082364887, 0.0238413885, 0.0946623161, 0.049615398, -0.1908243597, 0.3812553585, 0.3104431629, 0.0505839363, 0.2253709882, 0.1375240833, 0.110114947, -0.1860498637, 0.6575947404, -0.1123829037, 0.1562343538, 0.0674786791, 0.2533710301, -0.0266020335, 0.13123796...
https://github.com/huggingface/datasets/issues/2009
Ambiguous documentation
Hi @theo-m ! A few lines above this line, you'll find that the `_split_generators` method returns a list of `SplitGenerator`s objects: ```python datasets.SplitGenerator( name=datasets.Split.VALIDATION, # These kwargs will be passed to _generate_examples gen_kwargs={ "filepath": os.path.jo...
https://github.com/huggingface/datasets/blob/2ac9a0d24a091989f869af55f9f6411b37ff5188/templates/new_dataset_script.py#L156-L158 Looking at the template, I find this documentation line to be confusing, the method parameters don't include the `gen_kwargs` so I'm unclear where they're coming from. Happy to push a PR...
79
Ambiguous documentation https://github.com/huggingface/datasets/blob/2ac9a0d24a091989f869af55f9f6411b37ff5188/templates/new_dataset_script.py#L156-L158 Looking at the template, I find this documentation line to be confusing, the method parameters don't include the `gen_kwargs` so I'm unclear where they're coming f...
[ 0.0630257651, -0.0559888333, -0.0537044108, 0.1070299968, 0.0235675387, 0.1693240404, 0.3407928348, 0.1025089473, -0.1445971131, -0.1650756449, 0.0825749785, 0.3592205048, 0.0475475043, 0.0229658764, 0.1957674623, -0.226720646, 0.0967700556, 0.1418140978, -0.0160179399, -0.1553...
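The `gen_kwargs` mechanism explained in the comment above, reduced to plain Python (an illustrative sketch, not the `datasets` internals): `_split_generators` returns objects carrying a `gen_kwargs` dict, and the builder later calls `_generate_examples(**gen_kwargs)`, which is why the parameters seem to appear from nowhere in the template.

```python
# Toy model of how SplitGenerator.gen_kwargs reaches _generate_examples.
class SplitGenerator:
    def __init__(self, name, gen_kwargs):
        self.name = name
        self.gen_kwargs = gen_kwargs

class ToyBuilder:
    def _split_generators(self):
        return [
            SplitGenerator(
                name="validation",
                gen_kwargs={"filepath": "dev.jsonl", "split": "dev"},
            ),
        ]

    def _generate_examples(self, filepath, split):
        # Receives the gen_kwargs entries as keyword arguments.
        yield 0, {"file": filepath, "split": split}

    def examples(self):
        for gen in self._split_generators():
            yield from self._generate_examples(**gen.gen_kwargs)

builder = ToyBuilder()
print(list(builder.examples()))
# [(0, {'file': 'dev.jsonl', 'split': 'dev'})]
```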
https://github.com/huggingface/datasets/issues/2009
Ambiguous documentation
Oh ok I hadn't made the connection between those two, will offer a tweak to the comment and the template then - thanks!
https://github.com/huggingface/datasets/blob/2ac9a0d24a091989f869af55f9f6411b37ff5188/templates/new_dataset_script.py#L156-L158 Looking at the template, I find this documentation line to be confusing, the method parameters don't include the `gen_kwargs` so I'm unclear where they're coming from. Happy to push a PR...
23
Ambiguous documentation https://github.com/huggingface/datasets/blob/2ac9a0d24a091989f869af55f9f6411b37ff5188/templates/new_dataset_script.py#L156-L158 Looking at the template, I find this documentation line to be confusing, the method parameters don't include the `gen_kwargs` so I'm unclear where they're coming f...
[ 0.090754047, -0.178482309, -0.0821814984, -0.1347765028, 0.2026105374, 0.1043045893, 0.4321021438, 0.0742143989, -0.0925438553, -0.1297512203, 0.1296099722, 0.2071549743, 0.0149753541, 0.1009715125, 0.3126717806, 0.0319482237, 0.2020218819, 0.0814610869, -0.1270152628, -0.18600...
https://github.com/huggingface/datasets/issues/2007
How to not load huggingface datasets into memory
So maybe a summary here: if I could fit a large model with batch_size = X into memory, is there a way I could train this model on huge datasets while keeping the same setting? Thanks
Hi I am running this example from transformers library version 4.3.3: (Here is the full documentation https://github.com/huggingface/transformers/issues/8771 but the running command should work out of the box) USE_TF=0 deepspeed run_seq2seq.py --model_name_or_path google/mt5-base --dataset_name wmt16 --dataset_...
36
How to not load huggingface datasets into memory Hi I am running this example from transformers library version 4.3.3: (Here is the full documentation https://github.com/huggingface/transformers/issues/8771 but the running command should work out of the box) USE_TF=0 deepspeed run_seq2seq.py --model_name_or_...
[ -0.1572741568, -0.5049954653, 0.0060413675, 0.5300238729, 0.5585084558, 0.0471686386, 0.0790676922, 0.2601301372, 0.3925335705, 0.1179506108, -0.0250126366, -0.2572236061, -0.3355447352, 0.3502959311, 0.0798836648, -0.1671073884, 0.0480983369, -0.009338446, -0.4906492829, 0.081...
https://github.com/huggingface/datasets/issues/2007
How to not load huggingface datasets into memory
The `datasets` library doesn't load datasets into memory. Therefore you can load a dataset that is terabytes big without filling up your RAM. The only thing that's loaded into memory during training is the batch used in the training step. So as long as your model works with batch_size = X, then you can load an eve...
Hi I am running this example from transformers library version 4.3.3: (Here is the full documentation https://github.com/huggingface/transformers/issues/8771 but the running command should work out of the box) USE_TF=0 deepspeed run_seq2seq.py --model_name_or_path google/mt5-base --dataset_name wmt16 --dataset_...
208
How to not load huggingface datasets into memory Hi I am running this example from transformers library version 4.3.3: (Here is the full documentation https://github.com/huggingface/transformers/issues/8771 but the running command should work out of the box) USE_TF=0 deepspeed run_seq2seq.py --model_name_or_...
[ -0.151876986, -0.5147147775, 0.0332164206, 0.4974527955, 0.5210763216, 0.0139151076, 0.118027322, 0.2357332855, 0.4079457223, 0.1879964024, 0.007619652, -0.2028843611, -0.2850469947, 0.3407028317, 0.0886424854, -0.1032491624, 0.053281948, 0.0866248161, -0.5714517832, 0.05235029...
https://github.com/huggingface/datasets/issues/2005
Setting to torch format not working with torchvision and MNIST
Adding to the previous information, I think `torch.utils.data.DataLoader` is doing some conversion. What I tried: ```python train_dataset = load_dataset('mnist') ``` I don't use any `map` or `set_format` or any `transform`. I use this directly, and try to load batches using the `DataLoader` with batch size 2, I g...
Hi I am trying to use `torchvision.transforms` to handle the transformation of the image data in the `mnist` dataset. Assume I have a `transform` variable which contains the `torchvision.transforms` object. A snippet of what I am trying to do: ```python def prepare_features(examples): images = [] labe...
202
Setting to torch format not working with torchvision and MNIST Hi I am trying to use `torchvision.transforms` to handle the transformation of the image data in the `mnist` dataset. Assume I have a `transform` variable which contains the `torchvision.transforms` object. A snippet of what I am trying to do: ```p...
[ -0.1273753047, -0.3511653543, -0.0179326311, 0.3536090851, 0.4760026336, 0.0887137353, 0.7330292463, 0.3804347813, 0.0655257925, -0.0380426571, -0.1111534089, 0.3819510639, -0.2329690456, -0.3339343965, 0.0351092033, -0.56149441, 0.2225798368, -0.0998275802, -0.2807980776, -0.0...
https://github.com/huggingface/datasets/issues/2005
Setting to torch format not working with torchvision and MNIST
What are the feature types of your new dataset after `.map`? Can you try adding `features=` in the `.map` call in order to set the "image" feature type to `Array2D`? The default feature type is lists of lists; we've not implemented shape verification to use ArrayXD instead of nested lists yet
Hi I am trying to use `torchvision.transforms` to handle the transformation of the image data in the `mnist` dataset. Assume I have a `transform` variable which contains the `torchvision.transforms` object. A snippet of what I am trying to do: ```python def prepare_features(examples): images = [] labe...
53
Setting to torch format not working with torchvision and MNIST Hi I am trying to use `torchvision.transforms` to handle the transformation of the image data in the `mnist` dataset. Assume I have a `transform` variable which contains the `torchvision.transforms` object. A snippet of what I am trying to do: ```p...
[ -0.1273753047, -0.3511653543, -0.0179326311, 0.3536090851, 0.4760026336, 0.0887137353, 0.7330292463, 0.3804347813, 0.0655257925, -0.0380426571, -0.1111534089, 0.3819510639, -0.2329690456, -0.3339343965, 0.0351092033, -0.56149441, 0.2225798368, -0.0998275802, -0.2807980776, -0.0...
https://github.com/huggingface/datasets/issues/2005
Setting to torch format not working with torchvision and MNIST
Hi @lhoestq Raw feature types are like this: ``` Image: <class 'list'> 60000 #(type, len) <class 'list'> 28 <class 'list'> 28 <class 'int'> Label: <class 'list'> 60000 <class 'int'> ``` Inside the `prepare_feature` method with batch size 100000 , after processing, they are like this: Inside Prepare Tr...
Hi I am trying to use `torchvision.transforms` to handle the transformation of the image data in the `mnist` dataset. Assume I have a `transform` variable which contains the `torchvision.transforms` object. A snippet of what I am trying to do: ```python def prepare_features(examples): images = [] labe...
213
Setting to torch format not working with torchvision and MNIST Hi I am trying to use `torchvision.transforms` to handle the transformation of the image data in the `mnist` dataset. Assume I have a `transform` variable which contains the `torchvision.transforms` object. A snippet of what I am trying to do: ```p...
[ -0.1273753047, -0.3511653543, -0.0179326311, 0.3536090851, 0.4760026336, 0.0887137353, 0.7330292463, 0.3804347813, 0.0655257925, -0.0380426571, -0.1111534089, 0.3819510639, -0.2329690456, -0.3339343965, 0.0351092033, -0.56149441, 0.2225798368, -0.0998275802, -0.2807980776, -0.0...
https://github.com/huggingface/datasets/issues/2005
Setting to torch format not working with torchvision and MNIST
Hi @lhoestq # Using Array3D I tried this: ```python features = datasets.Features({ "image": datasets.Array3D(shape=(1,28,28),dtype="float32"), "label": datasets.features.ClassLabel(names=["0", "1", "2", "3", "4", "5", "6", "7", "8", "9"]), }) train_dataset = raw_dataset.map(pre...
Hi I am trying to use `torchvision.transforms` to handle the transformation of the image data in the `mnist` dataset. Assume I have a `transform` variable which contains the `torchvision.transforms` object. A snippet of what I am trying to do: ```python def prepare_features(examples): images = [] labe...
447
Setting to torch format not working with torchvision and MNIST Hi I am trying to use `torchvision.transforms` to handle the transformation of the image data in the `mnist` dataset. Assume I have a `transform` variable which contains the `torchvision.transforms` object. A snippet of what I am trying to do: ```p...
[ -0.1273753047, -0.3511653543, -0.0179326311, 0.3536090851, 0.4760026336, 0.0887137353, 0.7330292463, 0.3804347813, 0.0655257925, -0.0380426571, -0.1111534089, 0.3819510639, -0.2329690456, -0.3339343965, 0.0351092033, -0.56149441, 0.2225798368, -0.0998275802, -0.2807980776, -0.0...