html_url
stringlengths
48
51
title
stringlengths
5
268
comments
stringlengths
63
51.8k
body
stringlengths
0
36.2k
comment_length
int64
16
1.52k
text
stringlengths
164
54.1k
embeddings
list
https://github.com/huggingface/datasets/issues/2005
Setting to torch format not working with torchvision and MNIST
# Convert raw tensors to torch format Strangely, converting to torch tensors works perfectly on `raw_dataset`: ```python raw_dataset.set_format('torch',columns=['image','label']) ``` Types: ``` Image: <class 'torch.Tensor'> 60000 <class 'torch.Tensor'> 28 <class 'torch.Tensor'> 28 <class 'torch.Tensor'> Lab...
Hi I am trying to use `torchvision.transforms` to handle the transformation of the image data in the `mnist` dataset. Assume I have a `transform` variable which contains the `torchvision.transforms` object. A snippet of what I am trying to do: ```python def prepare_features(examples): images = [] labe...
299
Setting to torch format not working with torchvision and MNIST Hi I am trying to use `torchvision.transforms` to handle the transformation of the image data in the `mnist` dataset. Assume I have a `transform` variable which contains the `torchvision.transforms` object. A snippet of what I am trying to do: ```p...
[ -0.1273753047, -0.3511653543, -0.0179326311, 0.3536090851, 0.4760026336, 0.0887137353, 0.7330292463, 0.3804347813, 0.0655257925, -0.0380426571, -0.1111534089, 0.3819510639, -0.2329690456, -0.3339343965, 0.0351092033, -0.56149441, 0.2225798368, -0.0998275802, -0.2807980776, -0.0...
https://github.com/huggingface/datasets/issues/2005
Setting to torch format not working with torchvision and MNIST
Concluding, the way it works right now is: 1. Converting raw dataset to `torch` format. 2. Use the transform and apply using `map`, ensure the returned values are tensors. 3. When mapping, use `features` with `image` being `Array3D` type.
Hi I am trying to use `torchvision.transforms` to handle the transformation of the image data in the `mnist` dataset. Assume I have a `transform` variable which contains the `torchvision.transforms` object. A snippet of what I am trying to do: ```python def prepare_features(examples): images = [] labe...
39
Setting to torch format not working with torchvision and MNIST Hi I am trying to use `torchvision.transforms` to handle the transformation of the image data in the `mnist` dataset. Assume I have a `transform` variable which contains the `torchvision.transforms` object. A snippet of what I am trying to do: ```p...
[ -0.1273753047, -0.3511653543, -0.0179326311, 0.3536090851, 0.4760026336, 0.0887137353, 0.7330292463, 0.3804347813, 0.0655257925, -0.0380426571, -0.1111534089, 0.3819510639, -0.2329690456, -0.3339343965, 0.0351092033, -0.56149441, 0.2225798368, -0.0998275802, -0.2807980776, -0.0...
https://github.com/huggingface/datasets/issues/2005
Setting to torch format not working with torchvision and MNIST
What the dataset returns depends on the feature type. For a feature type that is Sequence(Sequence(Sequence(Value("uint8")))), a dataset formatted as "torch" return lists of lists of tensors. This is because the lists lengths may vary. For a feature type that is Array3D on the other hand it returns one tensor. This i...
Hi I am trying to use `torchvision.transforms` to handle the transformation of the image data in the `mnist` dataset. Assume I have a `transform` variable which contains the `torchvision.transforms` object. A snippet of what I am trying to do: ```python def prepare_features(examples): images = [] labe...
66
Setting to torch format not working with torchvision and MNIST Hi I am trying to use `torchvision.transforms` to handle the transformation of the image data in the `mnist` dataset. Assume I have a `transform` variable which contains the `torchvision.transforms` object. A snippet of what I am trying to do: ```p...
[ -0.1273753047, -0.3511653543, -0.0179326311, 0.3536090851, 0.4760026336, 0.0887137353, 0.7330292463, 0.3804347813, 0.0655257925, -0.0380426571, -0.1111534089, 0.3819510639, -0.2329690456, -0.3339343965, 0.0351092033, -0.56149441, 0.2225798368, -0.0998275802, -0.2807980776, -0.0...
https://github.com/huggingface/datasets/issues/2005
Setting to torch format not working with torchvision and MNIST
Okay, that makes sense. Raw images are list of Array2D, hence we get a single tensor when `set_format` is used. But, why should I need to convert the raw images to `torch` format when `map` does this internally? Using `Array3D` did not work with `map` when raw images weren't `set_format`ted to torch type.
Hi I am trying to use `torchvision.transforms` to handle the transformation of the image data in the `mnist` dataset. Assume I have a `transform` variable which contains the `torchvision.transforms` object. A snippet of what I am trying to do: ```python def prepare_features(examples): images = [] labe...
53
Setting to torch format not working with torchvision and MNIST Hi I am trying to use `torchvision.transforms` to handle the transformation of the image data in the `mnist` dataset. Assume I have a `transform` variable which contains the `torchvision.transforms` object. A snippet of what I am trying to do: ```p...
[ -0.1273753047, -0.3511653543, -0.0179326311, 0.3536090851, 0.4760026336, 0.0887137353, 0.7330292463, 0.3804347813, 0.0655257925, -0.0380426571, -0.1111534089, 0.3819510639, -0.2329690456, -0.3339343965, 0.0351092033, -0.56149441, 0.2225798368, -0.0998275802, -0.2807980776, -0.0...
https://github.com/huggingface/datasets/issues/2005
Setting to torch format not working with torchvision and MNIST
I understand that `map` needs to know what kind of output tensors are expected, and thus converting the raw dataset to `torch` format is necessary. Closing the issue since it is resolved.
Hi I am trying to use `torchvision.transforms` to handle the transformation of the image data in the `mnist` dataset. Assume I have a `transform` variable which contains the `torchvision.transforms` object. A snippet of what I am trying to do: ```python def prepare_features(examples): images = [] labe...
32
Setting to torch format not working with torchvision and MNIST Hi I am trying to use `torchvision.transforms` to handle the transformation of the image data in the `mnist` dataset. Assume I have a `transform` variable which contains the `torchvision.transforms` object. A snippet of what I am trying to do: ```p...
[ -0.1273753047, -0.3511653543, -0.0179326311, 0.3536090851, 0.4760026336, 0.0887137353, 0.7330292463, 0.3804347813, 0.0655257925, -0.0380426571, -0.1111534089, 0.3819510639, -0.2329690456, -0.3339343965, 0.0351092033, -0.56149441, 0.2225798368, -0.0998275802, -0.2807980776, -0.0...
https://github.com/huggingface/datasets/issues/2003
Messages are being printed to the `stdout`
This is expected to show this message to the user via stdout. This way the users see it directly and can cancel the downloading if they want to. Could you elaborate why it would be better to have it in stderr instead of stdout ?
In this code segment, we can see some messages are being printed to the `stdout`. https://github.com/huggingface/datasets/blob/7e60bb509b595e8edc60a87f32b2bacfc065d607/src/datasets/builder.py#L545-L554 According to the comment, it is done intentionally, but I don't really understand why don't we log it with a higher ...
45
Messages are being printed to the `stdout` In this code segment, we can see some messages are being printed to the `stdout`. https://github.com/huggingface/datasets/blob/7e60bb509b595e8edc60a87f32b2bacfc065d607/src/datasets/builder.py#L545-L554 According to the comment, it is done intentionally, but I don't really ...
[ -0.0470960625, -0.3709275424, -0.0396359488, 0.270612061, 0.2318518311, -0.0572963841, 0.2466088235, 0.1149494946, -0.0417822339, 0.1984181851, 0.1732619703, 0.1642703563, -0.1199179217, 0.368881762, 0.1807005703, 0.1382276416, -0.0852123499, -0.1296654493, -0.3972430825, 0.298...
https://github.com/huggingface/datasets/issues/2003
Messages are being printed to the `stdout`
@lhoestq, sorry for the late reply I completely understand why you decided to output a message that is always shown. The only problem is that the message is printed to the `stdout`. For example, if the user runs `python run_glue.py > log_file`, it will redirect `stdout` to the file named `log_file`, and the message...
In this code segment, we can see some messages are being printed to the `stdout`. https://github.com/huggingface/datasets/blob/7e60bb509b595e8edc60a87f32b2bacfc065d607/src/datasets/builder.py#L545-L554 According to the comment, it is done intentionally, but I don't really understand why don't we log it with a higher ...
90
Messages are being printed to the `stdout` In this code segment, we can see some messages are being printed to the `stdout`. https://github.com/huggingface/datasets/blob/7e60bb509b595e8edc60a87f32b2bacfc065d607/src/datasets/builder.py#L545-L554 According to the comment, it is done intentionally, but I don't really ...
[ 0.05412516, -0.4283030629, -0.017841002, 0.1863197088, 0.1776288301, -0.1547812074, 0.3838961124, 0.1615998596, 0.0672425106, 0.2444253117, 0.1907832325, 0.3053063154, -0.142559588, 0.284976095, 0.2610239387, 0.141788736, -0.0724064931, 0.0228821728, -0.4255485535, 0.1076408252...
https://github.com/huggingface/datasets/issues/2000
Windows Permission Error (most recent version of datasets)
Hi @itsLuisa ! Could you give us more information about the error you're getting, please? A copy-paste of the Traceback would be nice to get a better understanding of what is wrong :)
Hi everyone, Can anyone help me with why the dataset loading script below raises a Windows Permission Error? I stuck quite closely to https://github.com/huggingface/datasets/blob/master/datasets/conll2003/conll2003.py , only I want to load the data from three local three-column tsv-files (id\ttokens\tpos_tags\n). I am...
33
Windows Permission Error (most recent version of datasets) Hi everyone, Can anyone help me with why the dataset loading script below raises a Windows Permission Error? I stuck quite closely to https://github.com/huggingface/datasets/blob/master/datasets/conll2003/conll2003.py , only I want to load the data from thre...
[ -0.1825570911, 0.1823853403, -0.0400311947, 0.2590482235, 0.0813781247, 0.1336375624, 0.4580341578, 0.0122546703, 0.1864631921, 0.0741284192, -0.0634553432, -0.0112266093, -0.1087960601, 0.1572923511, -0.0323619768, -0.0282509699, 0.1453508139, 0.0480683856, 0.0319975466, 0.139...
https://github.com/huggingface/datasets/issues/2000
Windows Permission Error (most recent version of datasets)
Hello @SBrandeis , this is it: ``` Traceback (most recent call last): File "C:\Users\Luisa\AppData\Local\Programs\Python\Python38\lib\site-packages\datasets\builder.py", line 537, in incomplete_dir yield tmp_dir File "C:\Users\Luisa\AppData\Local\Programs\Python\Python38\lib\site-packages\datasets\builder....
Hi everyone, Can anyone help me with why the dataset loading script below raises a Windows Permission Error? I stuck quite closely to https://github.com/huggingface/datasets/blob/master/datasets/conll2003/conll2003.py , only I want to load the data from three local three-column tsv-files (id\ttokens\tpos_tags\n). I am...
230
Windows Permission Error (most recent version of datasets) Hi everyone, Can anyone help me with why the dataset loading script below raises a Windows Permission Error? I stuck quite closely to https://github.com/huggingface/datasets/blob/master/datasets/conll2003/conll2003.py , only I want to load the data from thre...
[ -0.1825570911, 0.1823853403, -0.0400311947, 0.2590482235, 0.0813781247, 0.1336375624, 0.4580341578, 0.0122546703, 0.1864631921, 0.0741284192, -0.0634553432, -0.0112266093, -0.1087960601, 0.1572923511, -0.0323619768, -0.0282509699, 0.1453508139, 0.0480683856, 0.0319975466, 0.139...
https://github.com/huggingface/datasets/issues/2000
Windows Permission Error (most recent version of datasets)
Hi @itsLuisa, thanks for sharing the Traceback. You are defining the "id" field as a `string` feature: ```python class Sample(datasets.GeneratorBasedBuilder): ... def _info(self): return datasets.DatasetInfo( features=datasets.Features( { "id"...
Hi everyone, Can anyone help me with why the dataset loading script below raises a Windows Permission Error? I stuck quite closely to https://github.com/huggingface/datasets/blob/master/datasets/conll2003/conll2003.py , only I want to load the data from three local three-column tsv-files (id\ttokens\tpos_tags\n). I am...
73
Windows Permission Error (most recent version of datasets) Hi everyone, Can anyone help me with why the dataset loading script below raises a Windows Permission Error? I stuck quite closely to https://github.com/huggingface/datasets/blob/master/datasets/conll2003/conll2003.py , only I want to load the data from thre...
[ -0.1825570911, 0.1823853403, -0.0400311947, 0.2590482235, 0.0813781247, 0.1336375624, 0.4580341578, 0.0122546703, 0.1864631921, 0.0741284192, -0.0634553432, -0.0112266093, -0.1087960601, 0.1572923511, -0.0323619768, -0.0282509699, 0.1453508139, 0.0480683856, 0.0319975466, 0.139...
https://github.com/huggingface/datasets/issues/1996
Error when exploring `arabic_speech_corpus`
Actually soundfile is not a dependency of this dataset. The error comes from a bug that was fixed in this commit: https://github.com/huggingface/datasets/pull/1767/commits/c304e63629f4453367de2fd42883a78768055532 Basically the library used to consider the `import soundfile` in the docstring as a dependency, while it'...
Navigate to https://huggingface.co/datasets/viewer/?dataset=arabic_speech_corpus Error: ``` ImportError: To be able to use this dataset, you need to install the following dependencies['soundfile'] using 'pip install soundfile' for instance' Traceback: File "/home/sasha/.local/share/virtualenvs/lib-ogGKnCK_/lib/p...
58
Error when exploring `arabic_speech_corpus` Navigate to https://huggingface.co/datasets/viewer/?dataset=arabic_speech_corpus Error: ``` ImportError: To be able to use this dataset, you need to install the following dependencies['soundfile'] using 'pip install soundfile' for instance' Traceback: File "/home/sas...
[ -0.259124577, -0.1135888472, -0.0363893025, 0.2228231132, 0.041789569, 0.0414842777, 0.0535817631, 0.3140556514, -0.1731434911, 0.0504595041, -0.2981741726, 0.0969086662, -0.1112823859, -0.0406438522, 0.1065410078, -0.3713723123, 0.0633600503, 0.203241244, 0.0054918919, -0.0132...
https://github.com/huggingface/datasets/issues/1994
not being able to get wikipedia es language
@lhoestq I really appreciate if you could help me providiing processed datasets, I do not really have access to enough resources to run the apache-beam and need to run the codes on these datasets. Only en/de/fr currently works, but I need all the languages more or less. thanks
Hi I am trying to run a code with wikipedia of config 20200501.es, getting: Traceback (most recent call last): File "run_mlm_t5.py", line 608, in <module> main() File "run_mlm_t5.py", line 359, in main datasets = load_dataset(data_args.dataset_name, data_args.dataset_config_name) File "/dara/libs...
48
not being able to get wikipedia es language Hi I am trying to run a code with wikipedia of config 20200501.es, getting: Traceback (most recent call last): File "run_mlm_t5.py", line 608, in <module> main() File "run_mlm_t5.py", line 359, in main datasets = load_dataset(data_args.dataset_name, data...
[ -0.3421071172, 0.0471929237, -0.1062647477, 0.0620171316, 0.2004550546, 0.1777444631, 0.1768745631, 0.3401966393, 0.1373331696, 0.0921181589, 0.4105236232, 0.3346935809, 0.0160008296, 0.3076458573, 0.1113205701, -0.337086916, 0.0556501672, 0.0752221942, -0.1947190017, -0.200176...
https://github.com/huggingface/datasets/issues/1994
not being able to get wikipedia es language
Hi @dorost1234, I think I can help you a little. I’ve processed some Wikipedia datasets (Spanish inclusive) using the HF/datasets library during recent research. @lhoestq Could you help me to upload these preprocessed datasets to Huggingface's repositories? To be more precise, I've built datasets from the following ...
Hi I am trying to run a code with wikipedia of config 20200501.es, getting: Traceback (most recent call last): File "run_mlm_t5.py", line 608, in <module> main() File "run_mlm_t5.py", line 359, in main datasets = load_dataset(data_args.dataset_name, data_args.dataset_config_name) File "/dara/libs...
96
not being able to get wikipedia es language Hi I am trying to run a code with wikipedia of config 20200501.es, getting: Traceback (most recent call last): File "run_mlm_t5.py", line 608, in <module> main() File "run_mlm_t5.py", line 359, in main datasets = load_dataset(data_args.dataset_name, data...
[ -0.3421071172, 0.0471929237, -0.1062647477, 0.0620171316, 0.2004550546, 0.1777444631, 0.1768745631, 0.3401966393, 0.1373331696, 0.0921181589, 0.4105236232, 0.3346935809, 0.0160008296, 0.3076458573, 0.1113205701, -0.337086916, 0.0556501672, 0.0752221942, -0.1947190017, -0.200176...
https://github.com/huggingface/datasets/issues/1994
not being able to get wikipedia es language
Thank you so much @jonatasgrosman , I greatly appreciate your help with them. Yes, I unfortunately does not have access to a good resource and need it for my research. I greatly appreciate @lhoestq your help with uploading the processed datasets in huggingface datasets. This would be really helpful for some users l...
Hi I am trying to run a code with wikipedia of config 20200501.es, getting: Traceback (most recent call last): File "run_mlm_t5.py", line 608, in <module> main() File "run_mlm_t5.py", line 359, in main datasets = load_dataset(data_args.dataset_name, data_args.dataset_config_name) File "/dara/libs...
222
not being able to get wikipedia es language Hi I am trying to run a code with wikipedia of config 20200501.es, getting: Traceback (most recent call last): File "run_mlm_t5.py", line 608, in <module> main() File "run_mlm_t5.py", line 359, in main datasets = load_dataset(data_args.dataset_name, data...
[ -0.3421071172, 0.0471929237, -0.1062647477, 0.0620171316, 0.2004550546, 0.1777444631, 0.1768745631, 0.3401966393, 0.1373331696, 0.0921181589, 0.4105236232, 0.3346935809, 0.0160008296, 0.3076458573, 0.1113205701, -0.337086916, 0.0556501672, 0.0752221942, -0.1947190017, -0.200176...
https://github.com/huggingface/datasets/issues/1994
not being able to get wikipedia es language
Hi @dorost1234, so sorry, but looking at my files here, I figure out that I've preprocessed files using the HF/datasets for all the languages previously listed by me (Portuguese, Russian, French, Japanese, Chinese, and Turkish) except the Spanish (on my tests I've used the [wikicorpus](https://www.cs.upc.edu/~nlp/wikic...
Hi I am trying to run a code with wikipedia of config 20200501.es, getting: Traceback (most recent call last): File "run_mlm_t5.py", line 608, in <module> main() File "run_mlm_t5.py", line 359, in main datasets = load_dataset(data_args.dataset_name, data_args.dataset_config_name) File "/dara/libs...
86
not being able to get wikipedia es language Hi I am trying to run a code with wikipedia of config 20200501.es, getting: Traceback (most recent call last): File "run_mlm_t5.py", line 608, in <module> main() File "run_mlm_t5.py", line 359, in main datasets = load_dataset(data_args.dataset_name, data...
[ -0.3421071172, 0.0471929237, -0.1062647477, 0.0620171316, 0.2004550546, 0.1777444631, 0.1768745631, 0.3401966393, 0.1373331696, 0.0921181589, 0.4105236232, 0.3346935809, 0.0160008296, 0.3076458573, 0.1113205701, -0.337086916, 0.0556501672, 0.0752221942, -0.1947190017, -0.200176...
https://github.com/huggingface/datasets/issues/1994
not being able to get wikipedia es language
Thanks a lot for the information and help. This would be great to have these datasets. @lhoestq <https://github.com/lhoestq> Do you know a way I could get smaller amount of these data like 1 GBtype of each language to deal with computatioanl requirements? thanks On Sat, Mar 6, 2021 at 5:36 PM Jonatas Grosman <notific...
Hi I am trying to run a code with wikipedia of config 20200501.es, getting: Traceback (most recent call last): File "run_mlm_t5.py", line 608, in <module> main() File "run_mlm_t5.py", line 359, in main datasets = load_dataset(data_args.dataset_name, data_args.dataset_config_name) File "/dara/libs...
189
not being able to get wikipedia es language Hi I am trying to run a code with wikipedia of config 20200501.es, getting: Traceback (most recent call last): File "run_mlm_t5.py", line 608, in <module> main() File "run_mlm_t5.py", line 359, in main datasets = load_dataset(data_args.dataset_name, data...
[ -0.3421071172, 0.0471929237, -0.1062647477, 0.0620171316, 0.2004550546, 0.1777444631, 0.1768745631, 0.3401966393, 0.1373331696, 0.0921181589, 0.4105236232, 0.3346935809, 0.0160008296, 0.3076458573, 0.1113205701, -0.337086916, 0.0556501672, 0.0752221942, -0.1947190017, -0.200176...
https://github.com/huggingface/datasets/issues/1994
not being able to get wikipedia es language
Hi ! As mentioned above the Spanish configuration have parsing issues from `mwparserfromhell`. I haven't tested with the latest `mwparserfromhell` >=0.6 though. Which version of `mwparserfromhell` are you using ? > @lhoestq Could you help me to upload these preprocessed datasets to Huggingface's repositories? To be ...
Hi I am trying to run a code with wikipedia of config 20200501.es, getting: Traceback (most recent call last): File "run_mlm_t5.py", line 608, in <module> main() File "run_mlm_t5.py", line 359, in main datasets = load_dataset(data_args.dataset_name, data_args.dataset_config_name) File "/dara/libs...
231
not being able to get wikipedia es language Hi I am trying to run a code with wikipedia of config 20200501.es, getting: Traceback (most recent call last): File "run_mlm_t5.py", line 608, in <module> main() File "run_mlm_t5.py", line 359, in main datasets = load_dataset(data_args.dataset_name, data...
[ -0.3421071172, 0.0471929237, -0.1062647477, 0.0620171316, 0.2004550546, 0.1777444631, 0.1768745631, 0.3401966393, 0.1373331696, 0.0921181589, 0.4105236232, 0.3346935809, 0.0160008296, 0.3076458573, 0.1113205701, -0.337086916, 0.0556501672, 0.0752221942, -0.1947190017, -0.200176...
https://github.com/huggingface/datasets/issues/1994
not being able to get wikipedia es language
Hi @lhoestq! > Hi ! As mentioned above the Spanish configuration have parsing issues from mwparserfromhell. I haven't tested with the latest mwparserfromhell >=0.6 though. Which version of mwparserfromhell are you using ? I'm using the latest mwparserfromhell version (0.6) > That would be awesome ! Feel free t...
Hi I am trying to run a code with wikipedia of config 20200501.es, getting: Traceback (most recent call last): File "run_mlm_t5.py", line 608, in <module> main() File "run_mlm_t5.py", line 359, in main datasets = load_dataset(data_args.dataset_name, data_args.dataset_config_name) File "/dara/libs...
76
not being able to get wikipedia es language Hi I am trying to run a code with wikipedia of config 20200501.es, getting: Traceback (most recent call last): File "run_mlm_t5.py", line 608, in <module> main() File "run_mlm_t5.py", line 359, in main datasets = load_dataset(data_args.dataset_name, data...
[ -0.3421071172, 0.0471929237, -0.1062647477, 0.0620171316, 0.2004550546, 0.1777444631, 0.1768745631, 0.3401966393, 0.1373331696, 0.0921181589, 0.4105236232, 0.3346935809, 0.0160008296, 0.3076458573, 0.1113205701, -0.337086916, 0.0556501672, 0.0752221942, -0.1947190017, -0.200176...
https://github.com/huggingface/datasets/issues/1994
not being able to get wikipedia es language
Thank you so much @jonatasgrosman and @lhoestq this would be a great help. I am really thankful to you both and to wonderful Huggingface dataset library allowing us to train models at scale.
Hi I am trying to run a code with wikipedia of config 20200501.es, getting: Traceback (most recent call last): File "run_mlm_t5.py", line 608, in <module> main() File "run_mlm_t5.py", line 359, in main datasets = load_dataset(data_args.dataset_name, data_args.dataset_config_name) File "/dara/libs...
33
not being able to get wikipedia es language Hi I am trying to run a code with wikipedia of config 20200501.es, getting: Traceback (most recent call last): File "run_mlm_t5.py", line 608, in <module> main() File "run_mlm_t5.py", line 359, in main datasets = load_dataset(data_args.dataset_name, data...
[ -0.3421071172, 0.0471929237, -0.1062647477, 0.0620171316, 0.2004550546, 0.1777444631, 0.1768745631, 0.3401966393, 0.1373331696, 0.0921181589, 0.4105236232, 0.3346935809, 0.0160008296, 0.3076458573, 0.1113205701, -0.337086916, 0.0556501672, 0.0752221942, -0.1947190017, -0.200176...
https://github.com/huggingface/datasets/issues/1993
How to load a dataset with load_from disk and save it again after doing transformations without changing the original?
Hi ! That looks like a bug, can you provide some code so that we can reproduce ? It's not supposed to update the original dataset
I am using the latest datasets library. In my work, I first use **load_from_disk** to load a data set that contains 3.8Gb information. Then during my training process, I update that dataset object and add new elements and save it in a different place. When I save the dataset with **save_to_disk**, the original da...
26
How to load a dataset with load_from disk and save it again after doing transformations without changing the original? I am using the latest datasets library. In my work, I first use **load_from_disk** to load a data set that contains 3.8Gb information. Then during my training process, I update that dataset object ...
[ -0.2773599327, -0.1424061358, -0.0097127026, 0.2224388272, 0.2347680777, 0.1286466867, -0.0314266682, -0.0836327299, -0.0445270352, 0.0425950624, 0.075677, 0.3508271873, 0.0656663328, 0.2150924355, 0.1894966364, 0.2755099237, 0.3028120697, 0.287147522, -0.3395664692, -0.1187535...
https://github.com/huggingface/datasets/issues/1993
How to load a dataset with load_from disk and save it again after doing transformations without changing the original?
Hi, I experimented with RAG. Actually, you can run the [use_own_knowldge_dataset.py](https://github.com/shamanez/transformers/blob/rag-end-to-end-retrieval/examples/research_projects/rag/use_own_knowledge_dataset.py#L80). In the 80 you can save the dataset object to the disk with save_to_disk. Then in order to comp...
I am using the latest datasets library. In my work, I first use **load_from_disk** to load a data set that contains 3.8Gb information. Then during my training process, I update that dataset object and add new elements and save it in a different place. When I save the dataset with **save_to_disk**, the original da...
91
How to load a dataset with load_from disk and save it again after doing transformations without changing the original? I am using the latest datasets library. In my work, I first use **load_from_disk** to load a data set that contains 3.8Gb information. Then during my training process, I update that dataset object ...
[ -0.2981738746, -0.0828650892, 0.026727628, 0.1493992209, 0.3067895174, 0.1228003353, -0.056206774, -0.0614769422, 0.0445273146, 0.0541939661, -0.0652613267, 0.373824805, 0.0215632487, 0.1011980772, 0.1474012882, 0.2532072067, 0.1987997591, 0.2516978383, -0.2688195407, -0.097193...
https://github.com/huggingface/datasets/issues/1993
How to load a dataset with load_from disk and save it again after doing transformations without changing the original?
@lhoestq I also found that cache in tmp directory gets updated after transformations. This is really problematic when using datasets interactively. Let's say we use the shards function to a dataset loaded with csv, atm when we do transformations to shards and combine them it updates the original csv cache.
I am using the latest datasets library. In my work, I first use **load_from_disk** to load a data set that contains 3.8Gb information. Then during my training process, I update that dataset object and add new elements and save it in a different place. When I save the dataset with **save_to_disk**, the original da...
50
How to load a dataset with load_from disk and save it again after doing transformations without changing the original? I am using the latest datasets library. In my work, I first use **load_from_disk** to load a data set that contains 3.8Gb information. Then during my training process, I update that dataset object ...
[ -0.3775414824, -0.1095238999, 0.001331388, 0.2857869565, 0.2583921552, 0.1318970025, -0.0723696798, -0.0919264182, -0.0414135642, -0.0703071728, -0.0545120724, 0.2699585855, 0.0232360195, 0.0732332394, 0.1965225935, 0.2476300597, 0.2620952725, 0.2115163654, -0.4262392223, -0.05...
https://github.com/huggingface/datasets/issues/1993
How to load a dataset with load_from disk and save it again after doing transformations without changing the original?
I plan to update the save_to_disk method in #2025 so I can make sure the new save_to_disk doesn't corrupt your cache files. But from your last message it looks like save_to_disk isn't the root cause right ?
I am using the latest datasets library. In my work, I first use **load_from_disk** to load a data set that contains 3.8Gb information. Then during my training process, I update that dataset object and add new elements and save it in a different place. When I save the dataset with **save_to_disk**, the original da...
37
How to load a dataset with load_from disk and save it again after doing transformations without changing the original? I am using the latest datasets library. In my work, I first use **load_from_disk** to load a data set that contains 3.8Gb information. Then during my training process, I update that dataset object ...
[ -0.3105080724, -0.0680622831, 0.0046884213, 0.2137425393, 0.3093695939, 0.090576224, -0.0354268625, 0.0527886935, -0.0732768625, 0.006163137, 0.0222794786, 0.3242502511, 0.0898375958, 0.1039880365, 0.1767605543, 0.2547084987, 0.2500335276, 0.1937623173, -0.3340505362, -0.026814...
https://github.com/huggingface/datasets/issues/1993
How to load a dataset with load_from disk and save it again after doing transformations without changing the original?
ok, one more thing. When we use save_to_disk there are two files other than .arrow. dataset_info.json and state.json. Sometimes most of the fields in the dataset_infor.json are null, especially when saving dataset objects. Anyways I think load_from_disk uses the arrow files mentioned in state.json right?
I am using the latest datasets library. In my work, I first use **load_from_disk** to load a data set that contains 3.8Gb information. Then during my training process, I update that dataset object and add new elements and save it in a different place. When I save the dataset with **save_to_disk**, the original da...
45
How to load a dataset with load_from disk and save it again after doing transformations without changing the original? I am using the latest datasets library. In my work, I first use **load_from_disk** to load a data set that contains 3.8Gb information. Then during my training process, I update that dataset object ...
[ -0.3007223308, -0.0692068338, -0.0054889163, 0.2683241069, 0.1868758351, 0.0893004835, -0.09074191, -0.0502164736, -0.0936115384, -0.0296892803, 0.0845306218, 0.3922943771, 0.0921067819, 0.2010049224, 0.1915562898, 0.2176254541, 0.2755787075, 0.2541548014, -0.2981148362, -0.084...
https://github.com/huggingface/datasets/issues/1993
How to load a dataset with load_from disk and save it again after doing transformations without changing the original?
Perfect. For now, I am loading the dataset from CSV in my interactive process and will wait until you make the PR!
I am using the latest datasets library. In my work, I first use **load_from_disk** to load a data set that contains 3.8Gb information. Then during my training process, I update that dataset object and add new elements and save it in a different place. When I save the dataset with **save_to_disk**, the original da...
22
How to load a dataset with load_from disk and save it again after doing transformations without changing the original? I am using the latest datasets library. In my work, I first use **load_from_disk** to load a data set that contains 3.8Gb information. Then during my training process, I update that dataset object ...
[ -0.343549639, -0.1861004382, -0.0237766225, 0.1979098618, 0.2351960093, 0.0903731436, -0.0758544207, -0.0891272575, -0.0024477071, 0.0613525584, 0.0818682462, 0.2888205051, 0.0089174258, 0.2742180228, 0.2110170126, 0.2311642468, 0.2571014166, 0.3034490347, -0.3161802888, -0.031...
https://github.com/huggingface/datasets/issues/1992
`datasets.map` multi processing much slower than single processing
Hi @hwijeen, you might want to look at issues #1796 and #1949. I think it could be something related to the I/O operations being performed.
Hi, thank you for the great library. I've been using datasets to pretrain language models, and it often involves datasets as large as ~70G. My data preparation step is roughly two steps: `load_dataset` which splits corpora into a table of sentences, and `map` converts a sentence into a list of integers, using a tok...
25
`datasets.map` multi processing much slower than single processing Hi, thank you for the great library. I've been using datasets to pretrain language models, and it often involves datasets as large as ~70G. My data preparation step is roughly two steps: `load_dataset` which splits corpora into a table of sentenc...
[ -0.4131573439, -0.3192062378, -0.0879166052, 0.3574213088, -0.1027499363, 0.0203938819, 0.342405349, 0.1157952026, 0.0579246245, -0.0040250197, 0.0656987578, 0.4142296612, 0.186817944, 0.2118219286, -0.1675526053, 0.0080590574, 0.1905096471, -0.0455830507, 0.174060598, -0.00604...
https://github.com/huggingface/datasets/issues/1992
`datasets.map` multi processing much slower than single processing
I see that many people are experiencing the same issue. Is this problem considered an "official" bug that is worth a closer look? @lhoestq
Hi, thank you for the great library. I've been using datasets to pretrain language models, and it often involves datasets as large as ~70G. My data preparation step is roughly two steps: `load_dataset` which splits corpora into a table of sentences, and `map` converts a sentence into a list of integers, using a tok...
24
`datasets.map` multi processing much slower than single processing Hi, thank you for the great library. I've been using datasets to pretrain language models, and it often involves datasets as large as ~70G. My data preparation step is roughly two steps: `load_dataset` which splits corpora into a table of sentenc...
[ -0.3866334856, -0.2725814879, -0.0787934363, 0.3493686914, -0.1060923487, -0.0051659779, 0.3705374002, 0.1268498451, 0.0547807962, -0.016425699, 0.0762920827, 0.4323449731, 0.1645939946, 0.1775635779, -0.1425019652, 0.0690408275, 0.2205798924, -0.0627334192, 0.1990935504, -0.01...
https://github.com/huggingface/datasets/issues/1992
`datasets.map` multi processing much slower than single processing
Yes this is an official bug. On my side I haven't managed to reproduce it but @theo-m has. We'll investigate this !
Hi, thank you for the great library. I've been using datasets to pretrain language models, and it often involves datasets as large as ~70G. My data preparation step is roughly two steps: `load_dataset` which splits corpora into a table of sentences, and `map` converts a sentence into a list of integers, using a tok...
22
`datasets.map` multi processing much slower than single processing Hi, thank you for the great library. I've been using datasets to pretrain language models, and it often involves datasets as large as ~70G. My data preparation step is roughly two steps: `load_dataset` which splits corpora into a table of sentenc...
[ -0.4096489549, -0.3040941954, -0.0847012475, 0.3398915231, -0.0897001252, 0.0127339009, 0.352761507, 0.1181462258, 0.0582123809, 0.0094827693, 0.0727025419, 0.4454631209, 0.1630734205, 0.1893293262, -0.1400845945, 0.0302212369, 0.2145637274, -0.0554910563, 0.1930124462, -0.0090...
https://github.com/huggingface/datasets/issues/1992
`datasets.map` multi processing much slower than single processing
Thank you for the reply! I would be happy to follow the discussions related to the issue. If you do not mind, could you also give a little more explanation on my p.s.2? I am having a hard time figuring out why the single processing `map` uses all of my cores. @lhoestq @theo-m
Hi, thank you for the great library. I've been using datasets to pretrain language models, and it often involves datasets as large as ~70G. My data preparation step is roughly two steps: `load_dataset` which splits corpora into a table of sentences, and `map` converts a sentence into a list of integers, using a tok...
53
`datasets.map` multi processing much slower than single processing Hi, thank you for the great library. I've been using datasets to pretrain language models, and it often involves datasets as large as ~70G. My data preparation step is roughly two steps: `load_dataset` which splits corpora into a table of sentenc...
[ -0.3987976611, -0.3398563266, -0.091953449, 0.3478133082, -0.1069305539, 0.0302639306, 0.3706740737, 0.1026252583, 0.0623085685, 0.0149739897, 0.0955683589, 0.4472280145, 0.174067989, 0.198543638, -0.1503284425, 0.032466758, 0.2079120129, -0.0139357867, 0.208007589, -0.02249764...
https://github.com/huggingface/datasets/issues/1992
`datasets.map` multi processing much slower than single processing
Regarding your ps2: It depends what function you pass to `map`. For example, fast tokenizers from `transformers` in Rust tokenize texts and parallelize the tokenization over all the cores.
Hi, thank you for the great library. I've been using datasets to pretrain language models, and it often involves datasets as large as ~70G. My data preparation step is roughly two steps: `load_dataset` which splits corpora into a table of sentences, and `map` converts a sentence into a list of integers, using a tok...
29
`datasets.map` multi processing much slower than single processing Hi, thank you for the great library. I've been using datasets to pretrain language models, and it often involves datasets as large as ~70G. My data preparation step is roughly two steps: `load_dataset` which splits corpora into a table of sentenc...
[ -0.4439505339, -0.2767179608, -0.0788298398, 0.375110507, -0.0908034071, -0.005079166, 0.3211758435, 0.0643403158, -0.0458023325, -0.009251467, 0.0477654599, 0.4086365104, 0.2179488242, 0.1524447352, -0.14575167, -0.001972422, 0.2254884541, -0.021348143, 0.1467745155, 0.0173892...
https://github.com/huggingface/datasets/issues/1992
`datasets.map` multi processing much slower than single processing
I am still experiencing this issue with datasets 1.9.0.. Has there been a further investigation? <img width="442" alt="image" src="https://user-images.githubusercontent.com/29157715/126143387-8b5ddca2-a896-4e18-abf7-4fbf62a48b41.png">
Hi, thank you for the great library. I've been using datasets to pretrain language models, and it often involves datasets as large as ~70G. My data preparation step is roughly two steps: `load_dataset` which splits corpora into a table of sentences, and `map` converts a sentence into a list of integers, using a tok...
19
`datasets.map` multi processing much slower than single processing Hi, thank you for the great library. I've been using datasets to pretrain language models, and it often involves datasets as large as ~70G. My data preparation step is roughly two steps: `load_dataset` which splits corpora into a table of sentenc...
[ -0.4408695102, -0.2572330236, -0.1001467705, 0.3424951732, -0.1136197969, 0.0254861545, 0.3359663785, 0.1372741461, 0.0238075703, -0.0266040042, 0.072843045, 0.4267615378, 0.1649509519, 0.1844826192, -0.1783976257, 0.0735182539, 0.1910468191, -0.0237317272, 0.1997008175, -0.017...
https://github.com/huggingface/datasets/issues/1990
OSError: Memory mapping file failed: Cannot allocate memory
Do you think this is trying to bring the dataset into memory and if I can avoid it to save on memory so it only brings a batch into memory? @lhoestq thank you
Hi, I am trying to run a code with a wikipedia dataset, here is the command to reproduce the error. You can find the codes for run_mlm.py in huggingface repo here: https://github.com/huggingface/transformers/blob/v4.3.2/examples/language-modeling/run_mlm.py ``` python run_mlm.py --model_name_or_path bert-base-multi...
33
OSError: Memory mapping file failed: Cannot allocate memory Hi, I am trying to run a code with a wikipedia dataset, here is the command to reproduce the error. You can find the codes for run_mlm.py in huggingface repo here: https://github.com/huggingface/transformers/blob/v4.3.2/examples/language-modeling/run_mlm.py...
[ -0.2615424395, -0.0372640304, 0.0520399138, 0.6045719981, 0.4538376331, 0.2834010124, 0.145643726, 0.272467047, 0.178551048, 0.1056608111, -0.0581840612, 0.2273861319, -0.1771183163, -0.1459163874, -0.0370905027, -0.1933318675, 0.0840208605, 0.0134712467, -0.5429400206, 0.16046...
https://github.com/huggingface/datasets/issues/1990
OSError: Memory mapping file failed: Cannot allocate memory
It's not trying to bring the dataset into memory. Actually, it's trying to memory map the dataset file, which is different. It allows to load large dataset files without filling up memory. What dataset did you use to get this error ? On what OS are you running ? What's your python and pyarrow version ?
Hi, I am trying to run a code with a wikipedia dataset, here is the command to reproduce the error. You can find the codes for run_mlm.py in huggingface repo here: https://github.com/huggingface/transformers/blob/v4.3.2/examples/language-modeling/run_mlm.py ``` python run_mlm.py --model_name_or_path bert-base-multi...
56
OSError: Memory mapping file failed: Cannot allocate memory Hi, I am trying to run a code with a wikipedia dataset, here is the command to reproduce the error. You can find the codes for run_mlm.py in huggingface repo here: https://github.com/huggingface/transformers/blob/v4.3.2/examples/language-modeling/run_mlm.py...
[ -0.2615424395, -0.0372640304, 0.0520399138, 0.6045719981, 0.4538376331, 0.2834010124, 0.145643726, 0.272467047, 0.178551048, 0.1056608111, -0.0581840612, 0.2273861319, -0.1771183163, -0.1459163874, -0.0370905027, -0.1933318675, 0.0840208605, 0.0134712467, -0.5429400206, 0.16046...
https://github.com/huggingface/datasets/issues/1990
OSError: Memory mapping file failed: Cannot allocate memory
Dear @lhoestq thank you so much for coming back to me. Please find info below: 1) Dataset name: I used wikipedia with config 20200501.en 2) I got these pyarrow in my environment: pyarrow 2.0.0 <pip> pyarrow 3.0.0 <pip> 3) python versi...
Hi, I am trying to run a code with a wikipedia dataset, here is the command to reproduce the error. You can find the codes for run_mlm.py in huggingface repo here: https://github.com/huggingface/transformers/blob/v4.3.2/examples/language-modeling/run_mlm.py ``` python run_mlm.py --model_name_or_path bert-base-multi...
88
OSError: Memory mapping file failed: Cannot allocate memory Hi, I am trying to run a code with a wikipedia dataset, here is the command to reproduce the error. You can find the codes for run_mlm.py in huggingface repo here: https://github.com/huggingface/transformers/blob/v4.3.2/examples/language-modeling/run_mlm.py...
[ -0.2615424395, -0.0372640304, 0.0520399138, 0.6045719981, 0.4538376331, 0.2834010124, 0.145643726, 0.272467047, 0.178551048, 0.1056608111, -0.0581840612, 0.2273861319, -0.1771183163, -0.1459163874, -0.0370905027, -0.1933318675, 0.0840208605, 0.0134712467, -0.5429400206, 0.16046...
https://github.com/huggingface/datasets/issues/1990
OSError: Memory mapping file failed: Cannot allocate memory
I noticed that the error happens when loading the validation dataset. What value of `data_args.validation_split_percentage` did you use ?
Hi, I am trying to run a code with a wikipedia dataset, here is the command to reproduce the error. You can find the codes for run_mlm.py in huggingface repo here: https://github.com/huggingface/transformers/blob/v4.3.2/examples/language-modeling/run_mlm.py ``` python run_mlm.py --model_name_or_path bert-base-multi...
19
OSError: Memory mapping file failed: Cannot allocate memory Hi, I am trying to run a code with a wikipedia dataset, here is the command to reproduce the error. You can find the codes for run_mlm.py in huggingface repo here: https://github.com/huggingface/transformers/blob/v4.3.2/examples/language-modeling/run_mlm.py...
[ -0.2615424395, -0.0372640304, 0.0520399138, 0.6045719981, 0.4538376331, 0.2834010124, 0.145643726, 0.272467047, 0.178551048, 0.1056608111, -0.0581840612, 0.2273861319, -0.1771183163, -0.1459163874, -0.0370905027, -0.1933318675, 0.0840208605, 0.0134712467, -0.5429400206, 0.16046...
https://github.com/huggingface/datasets/issues/1990
OSError: Memory mapping file failed: Cannot allocate memory
Dear @lhoestq thank you very much for the very sharp observation, indeed, this happens there, I use the default value of 5, I basically plan to subsample a part of the large dataset and choose it as validation set. Do you think this is bringing the data into memory during subsampling? Is there a way I could avoid ...
Hi, I am trying to run a code with a wikipedia dataset, here is the command to reproduce the error. You can find the codes for run_mlm.py in huggingface repo here: https://github.com/huggingface/transformers/blob/v4.3.2/examples/language-modeling/run_mlm.py ``` python run_mlm.py --model_name_or_path bert-base-multi...
133
OSError: Memory mapping file failed: Cannot allocate memory Hi, I am trying to run a code with a wikipedia dataset, here is the command to reproduce the error. You can find the codes for run_mlm.py in huggingface repo here: https://github.com/huggingface/transformers/blob/v4.3.2/examples/language-modeling/run_mlm.py...
[ -0.2615424395, -0.0372640304, 0.0520399138, 0.6045719981, 0.4538376331, 0.2834010124, 0.145643726, 0.272467047, 0.178551048, 0.1056608111, -0.0581840612, 0.2273861319, -0.1771183163, -0.1459163874, -0.0370905027, -0.1933318675, 0.0840208605, 0.0134712467, -0.5429400206, 0.16046...
https://github.com/huggingface/datasets/issues/1990
OSError: Memory mapping file failed: Cannot allocate memory
Methods like `dataset.shard`, `dataset.train_test_split`, `dataset.select` etc. don't bring the dataset in memory. The only time when samples are brought to memory is when you access elements via `dataset[0]`, `dataset[:10]`, `dataset["my_column_names"]`. But it's possible that trying to use those methods to build...
Hi, I am trying to run a code with a wikipedia dataset, here is the command to reproduce the error. You can find the codes for run_mlm.py in huggingface repo here: https://github.com/huggingface/transformers/blob/v4.3.2/examples/language-modeling/run_mlm.py ``` python run_mlm.py --model_name_or_path bert-base-multi...
121
OSError: Memory mapping file failed: Cannot allocate memory Hi, I am trying to run a code with a wikipedia dataset, here is the command to reproduce the error. You can find the codes for run_mlm.py in huggingface repo here: https://github.com/huggingface/transformers/blob/v4.3.2/examples/language-modeling/run_mlm.py...
[ -0.2615424395, -0.0372640304, 0.0520399138, 0.6045719981, 0.4538376331, 0.2834010124, 0.145643726, 0.272467047, 0.178551048, 0.1056608111, -0.0581840612, 0.2273861319, -0.1771183163, -0.1459163874, -0.0370905027, -0.1933318675, 0.0840208605, 0.0134712467, -0.5429400206, 0.16046...
https://github.com/huggingface/datasets/issues/1989
Question/problem with dataset labels
It seems that I get parsing errors for various fields in my data. For example now I get this: ``` File "../../../models/tr-4.3.2/run_puppets.py", line 523, in <module> main() File "../../../models/tr-4.3.2/run_puppets.py", line 249, in main datasets = load_dataset("csv", data_files=data_files) File ...
Hi, I'm using a dataset with two labels "nurse" and "not nurse". For whatever reason (that I don't understand), I get an error that I think comes from the datasets package (using csv). Everything works fine if the labels are "nurse" and "surgeon". This is the trace I get: ``` File "../../../models/tr-4.3.2/run_...
128
Question/problem with dataset labels Hi, I'm using a dataset with two labels "nurse" and "not nurse". For whatever reason (that I don't understand), I get an error that I think comes from the datasets package (using csv). Everything works fine if the labels are "nurse" and "surgeon". This is the trace I get: `...
[ 0.1789148301, -0.0573230609, 0.0051793531, 0.1343671232, 0.4113537371, 0.3502124548, 0.6419945955, 0.1637074798, -0.1613179296, 0.0723498836, 0.1788543165, 0.0638753474, -0.0607729964, 0.030525703, -0.130230993, -0.1193704456, 0.0890837312, 0.1734791994, 0.1816426963, -0.044466...
https://github.com/huggingface/datasets/issues/1989
Question/problem with dataset labels
Not sure if this helps, this is how I load my files (as in the sample scripts on transformers): ``` if data_args.train_file.endswith(".csv"): # Loading a dataset from local csv files datasets = load_dataset("csv", data_files=data_files) ```
Hi, I'm using a dataset with two labels "nurse" and "not nurse". For whatever reason (that I don't understand), I get an error that I think comes from the datasets package (using csv). Everything works fine if the labels are "nurse" and "surgeon". This is the trace I get: ``` File "../../../models/tr-4.3.2/run_...
35
Question/problem with dataset labels Hi, I'm using a dataset with two labels "nurse" and "not nurse". For whatever reason (that I don't understand), I get an error that I think comes from the datasets package (using csv). Everything works fine if the labels are "nurse" and "surgeon". This is the trace I get: `...
[ 0.1789148301, -0.0573230609, 0.0051793531, 0.1343671232, 0.4113537371, 0.3502124548, 0.6419945955, 0.1637074798, -0.1613179296, 0.0723498836, 0.1788543165, 0.0638753474, -0.0607729964, 0.030525703, -0.130230993, -0.1193704456, 0.0890837312, 0.1734791994, 0.1816426963, -0.044466...
https://github.com/huggingface/datasets/issues/1989
Question/problem with dataset labels
Since this worked out of the box in a few examples before, I wonder if it's some quoting issue or something else.
Hi, I'm using a dataset with two labels "nurse" and "not nurse". For whatever reason (that I don't understand), I get an error that I think comes from the datasets package (using csv). Everything works fine if the labels are "nurse" and "surgeon". This is the trace I get: ``` File "../../../models/tr-4.3.2/run_...
22
Question/problem with dataset labels Hi, I'm using a dataset with two labels "nurse" and "not nurse". For whatever reason (that I don't understand), I get an error that I think comes from the datasets package (using csv). Everything works fine if the labels are "nurse" and "surgeon". This is the trace I get: `...
[ 0.1789148301, -0.0573230609, 0.0051793531, 0.1343671232, 0.4113537371, 0.3502124548, 0.6419945955, 0.1637074798, -0.1613179296, 0.0723498836, 0.1788543165, 0.0638753474, -0.0607729964, 0.030525703, -0.130230993, -0.1193704456, 0.0890837312, 0.1734791994, 0.1816426963, -0.044466...
https://github.com/huggingface/datasets/issues/1989
Question/problem with dataset labels
Hi @ioana-blue, Can you share a sample from your .csv? A dummy where you get this error will also help. I tried this csv: ```csv feature,label 1.2,not nurse 1.3,nurse 1.5,surgeon ``` and the following snippet: ```python from datasets import load_dataset d = load_dataset("csv",data_files=['test.csv']) ...
Hi, I'm using a dataset with two labels "nurse" and "not nurse". For whatever reason (that I don't understand), I get an error that I think comes from the datasets package (using csv). Everything works fine if the labels are "nurse" and "surgeon". This is the trace I get: ``` File "../../../models/tr-4.3.2/run_...
95
Question/problem with dataset labels Hi, I'm using a dataset with two labels "nurse" and "not nurse". For whatever reason (that I don't understand), I get an error that I think comes from the datasets package (using csv). Everything works fine if the labels are "nurse" and "surgeon". This is the trace I get: `...
[ 0.1789148301, -0.0573230609, 0.0051793531, 0.1343671232, 0.4113537371, 0.3502124548, 0.6419945955, 0.1637074798, -0.1613179296, 0.0723498836, 0.1788543165, 0.0638753474, -0.0607729964, 0.030525703, -0.130230993, -0.1193704456, 0.0890837312, 0.1734791994, 0.1816426963, -0.044466...
https://github.com/huggingface/datasets/issues/1989
Question/problem with dataset labels
I've had versions where it worked fain. For this dataset, I had all kind of parsing issues that I couldn't understand. What I ended up doing is strip all the columns that I didn't need and also make the label 0/1. I think one line that may have caused a problem was the csv version of this: ```crawl-data/CC-MAIN-...
Hi, I'm using a dataset with two labels "nurse" and "not nurse". For whatever reason (that I don't understand), I get an error that I think comes from the datasets package (using csv). Everything works fine if the labels are "nurse" and "surgeon". This is the trace I get: ``` File "../../../models/tr-4.3.2/run_...
197
Question/problem with dataset labels Hi, I'm using a dataset with two labels "nurse" and "not nurse". For whatever reason (that I don't understand), I get an error that I think comes from the datasets package (using csv). Everything works fine if the labels are "nurse" and "surgeon". This is the trace I get: `...
[ 0.1789148301, -0.0573230609, 0.0051793531, 0.1343671232, 0.4113537371, 0.3502124548, 0.6419945955, 0.1637074798, -0.1613179296, 0.0723498836, 0.1788543165, 0.0638753474, -0.0607729964, 0.030525703, -0.130230993, -0.1193704456, 0.0890837312, 0.1734791994, 0.1816426963, -0.044466...
https://github.com/huggingface/datasets/issues/1989
Question/problem with dataset labels
Hi @ioana-blue, What is the separator you're using for the csv? I see there are only two commas in the given line, but they don't seem like appropriate points. Also, is this a string part of one line, or an entire line? There should also be a label, right?
Hi, I'm using a dataset with two labels "nurse" and "not nurse". For whatever reason (that I don't understand), I get an error that I think comes from the datasets package (using csv). Everything works fine if the labels are "nurse" and "surgeon". This is the trace I get: ``` File "../../../models/tr-4.3.2/run_...
49
Question/problem with dataset labels Hi, I'm using a dataset with two labels "nurse" and "not nurse". For whatever reason (that I don't understand), I get an error that I think comes from the datasets package (using csv). Everything works fine if the labels are "nurse" and "surgeon". This is the trace I get: `...
[ 0.1789148301, -0.0573230609, 0.0051793531, 0.1343671232, 0.4113537371, 0.3502124548, 0.6419945955, 0.1637074798, -0.1613179296, 0.0723498836, 0.1788543165, 0.0638753474, -0.0607729964, 0.030525703, -0.130230993, -0.1193704456, 0.0890837312, 0.1734791994, 0.1816426963, -0.044466...
https://github.com/huggingface/datasets/issues/1989
Question/problem with dataset labels
Sorry for the confusion, the sample above was from a tsv that was used to derive the csv. Let me construct the csv again (I had remove it). This is the line in the csv - this is the whole line: ```crawl-data/CC-MAIN-2017-47/segments/1510934806225.78/wet/CC-MAIN-20171120203833-20171120223833-00571.warc.wet.gz,Rose ...
Hi, I'm using a dataset with two labels "nurse" and "not nurse". For whatever reason (that I don't understand), I get an error that I think comes from the datasets package (using csv). Everything works fine if the labels are "nurse" and "surgeon". This is the trace I get: ``` File "../../../models/tr-4.3.2/run_...
139
Question/problem with dataset labels Hi, I'm using a dataset with two labels "nurse" and "not nurse". For whatever reason (that I don't understand), I get an error that I think comes from the datasets package (using csv). Everything works fine if the labels are "nurse" and "surgeon". This is the trace I get: `...
[ 0.1789148301, -0.0573230609, 0.0051793531, 0.1343671232, 0.4113537371, 0.3502124548, 0.6419945955, 0.1637074798, -0.1613179296, 0.0723498836, 0.1788543165, 0.0638753474, -0.0607729964, 0.030525703, -0.130230993, -0.1193704456, 0.0890837312, 0.1734791994, 0.1816426963, -0.044466...
https://github.com/huggingface/datasets/issues/1989
Question/problem with dataset labels
Hi, Just in case you want to use tsv directly, you can use the separator argument while loading the dataset. ```python d = load_dataset("csv",data_files=['test.csv'],sep="\t") ``` Additionally, I don't face the issues with the following csv (same as the one you provided): ```sh link1,text1,info1,info2,info3,...
Hi, I'm using a dataset with two labels "nurse" and "not nurse". For whatever reason (that I don't understand), I get an error that I think comes from the datasets package (using csv). Everything works fine if the labels are "nurse" and "surgeon". This is the trace I get: ``` File "../../../models/tr-4.3.2/run_...
292
Question/problem with dataset labels Hi, I'm using a dataset with two labels "nurse" and "not nurse". For whatever reason (that I don't understand), I get an error that I think comes from the datasets package (using csv). Everything works fine if the labels are "nurse" and "surgeon". This is the trace I get: `...
[ 0.1789148301, -0.0573230609, 0.0051793531, 0.1343671232, 0.4113537371, 0.3502124548, 0.6419945955, 0.1637074798, -0.1613179296, 0.0723498836, 0.1788543165, 0.0638753474, -0.0607729964, 0.030525703, -0.130230993, -0.1193704456, 0.0890837312, 0.1734791994, 0.1816426963, -0.044466...
https://github.com/huggingface/datasets/issues/1989
Question/problem with dataset labels
thanks for the tip. very strange :/ I'll check my datasets version as well. I will have more similar experiments soon so I'll let you know if I manage to get rid of this.
Hi, I'm using a dataset with two labels "nurse" and "not nurse". For whatever reason (that I don't understand), I get an error that I think comes from the datasets package (using csv). Everything works fine if the labels are "nurse" and "surgeon". This is the trace I get: ``` File "../../../models/tr-4.3.2/run_...
34
Question/problem with dataset labels Hi, I'm using a dataset with two labels "nurse" and "not nurse". For whatever reason (that I don't understand), I get an error that I think comes from the datasets package (using csv). Everything works fine if the labels are "nurse" and "surgeon". This is the trace I get: `...
[ 0.1789148301, -0.0573230609, 0.0051793531, 0.1343671232, 0.4113537371, 0.3502124548, 0.6419945955, 0.1637074798, -0.1613179296, 0.0723498836, 0.1788543165, 0.0638753474, -0.0607729964, 0.030525703, -0.130230993, -0.1193704456, 0.0890837312, 0.1734791994, 0.1816426963, -0.044466...
https://github.com/huggingface/datasets/issues/1989
Question/problem with dataset labels
No problem at all. I thought I'd be able to solve this but I'm unable to replicate the issue :/
Hi, I'm using a dataset with two labels "nurse" and "not nurse". For whatever reason (that I don't understand), I get an error that I think comes from the datasets package (using csv). Everything works fine if the labels are "nurse" and "surgeon". This is the trace I get: ``` File "../../../models/tr-4.3.2/run_...
20
Question/problem with dataset labels Hi, I'm using a dataset with two labels "nurse" and "not nurse". For whatever reason (that I don't understand), I get an error that I think comes from the datasets package (using csv). Everything works fine if the labels are "nurse" and "surgeon". This is the trace I get: `...
[ 0.1789148301, -0.0573230609, 0.0051793531, 0.1343671232, 0.4113537371, 0.3502124548, 0.6419945955, 0.1637074798, -0.1613179296, 0.0723498836, 0.1788543165, 0.0638753474, -0.0607729964, 0.030525703, -0.130230993, -0.1193704456, 0.0890837312, 0.1734791994, 0.1816426963, -0.044466...
https://github.com/huggingface/datasets/issues/1988
Readme.md is misleading about kinds of datasets?
Hi ! Yes it's possible to use image data. There are already a few of them available (MNIST, CIFAR..)
Hi! At the README.MD, you say: "efficient data pre-processing: simple, fast and reproducible data pre-processing for the above public datasets as well as your own local datasets in CSV/JSON/text. " But here: https://github.com/huggingface/datasets/blob/master/templates/new_dataset_script.py#L82-L117 You menti...
19
Readme.md is misleading about kinds of datasets? Hi! At the README.MD, you say: "efficient data pre-processing: simple, fast and reproducible data pre-processing for the above public datasets as well as your own local datasets in CSV/JSON/text. " But here: https://github.com/huggingface/datasets/blob/master/te...
[ -0.10489434, -0.407784611, -0.1252108216, 0.3528690636, 0.2064329088, 0.0773041323, 0.2821690142, -0.0132196387, 0.1014457271, -0.092151098, -0.2971546054, -0.0767040029, -0.0591902919, 0.541465342, 0.5551631451, -0.0550171807, 0.1947558224, 0.0077368575, -0.1606639922, 0.02435...
https://github.com/huggingface/datasets/issues/1983
The size of CoNLL-2003 is not consistant with the official release.
Hi, if you inspect the raw data, you can find there are 946 occurrences of `-DOCSTART- -X- -X- O` in the train split and `14041 + 946 = 14987`, which is exactly the number of sentences the authors report. `-DOCSTART-` is a special line that acts as a boundary between two different documents and is filtered out in ou...
Thanks for the dataset sharing! But when I use conll-2003, I meet some questions. The statistics of conll-2003 in this repo is : \#train 14041 \#dev 3250 \#test 3453 While the official statistics is: \#train 14987 \#dev 3466 \#test 3684 Wish for your reply~
78
The size of CoNLL-2003 is not consistant with the official release. Thanks for the dataset sharing! But when I use conll-2003, I meet some questions. The statistics of conll-2003 in this repo is : \#train 14041 \#dev 3250 \#test 3453 While the official statistics is: \#train 14987 \#dev 3466 \#test 3684 Wish ...
[ 0.1660704464, -0.3441514969, -0.0265199859, 0.371819526, -0.3747007549, -0.0020878583, 0.0613713749, -0.0708982125, -0.9394968748, -0.0062832027, 0.1252354681, 0.1568081975, 0.0813579783, 0.0008894681, -0.0184588209, 0.0292957257, 0.1648516804, -0.0400792062, 0.2795079052, -0.1...
https://github.com/huggingface/datasets/issues/1983
The size of CoNLL-2003 is not consistant with the official release.
We should mention in the Conll2003 dataset card that these lines have been removed indeed. If some users are interested in using these lines (maybe to recombine documents ?) then we can add a parameter to the conll2003 dataset to include them. But IMO the default config should stay the current one (without the `-...
Thanks for the dataset sharing! But when I use conll-2003, I meet some questions. The statistics of conll-2003 in this repo is : \#train 14041 \#dev 3250 \#test 3453 While the official statistics is: \#train 14987 \#dev 3466 \#test 3684 Wish for your reply~
73
The size of CoNLL-2003 is not consistant with the official release. Thanks for the dataset sharing! But when I use conll-2003, I meet some questions. The statistics of conll-2003 in this repo is : \#train 14041 \#dev 3250 \#test 3453 While the official statistics is: \#train 14987 \#dev 3466 \#test 3684 Wish ...
[ -0.0790515989, 0.0053056106, 0.0227090288, 0.1767312288, -0.1750466228, 0.0091939494, 0.19263421, 0.1599833965, -1.0088421106, 0.0919715613, 0.116121836, 0.0397599377, -0.0169362314, 0.1187621281, -0.061569199, 0.3097824752, 0.0787528381, 0.0600118078, 0.147918418, -0.119156122...
https://github.com/huggingface/datasets/issues/1983
The size of CoNLL-2003 is not consistant with the official release.
@lhoestq Yes, I agree adding a small note should be sufficient. Currently, NLTK's `ConllCorpusReader` ignores the `-DOCSTART-` lines so I think it's ok if we do the same. If there is an interest in the future to use these lines, then we can include them.
Thanks for the dataset sharing! But when I use conll-2003, I meet some questions. The statistics of conll-2003 in this repo is : \#train 14041 \#dev 3250 \#test 3453 While the official statistics is: \#train 14987 \#dev 3466 \#test 3684 Wish for your reply~
45
The size of CoNLL-2003 is not consistant with the official release. Thanks for the dataset sharing! But when I use conll-2003, I meet some questions. The statistics of conll-2003 in this repo is : \#train 14041 \#dev 3250 \#test 3453 While the official statistics is: \#train 14987 \#dev 3466 \#test 3684 Wish ...
[ 0.1301076263, 0.0962899998, 0.0394510366, 0.0533670262, -0.2169310898, 0.0075206906, 0.1542738676, -0.0001620095, -0.9731167555, 0.0672689676, 0.20561257, 0.0701418817, -0.0513290241, -0.0842716619, -0.0803269222, 0.3419830203, -0.0675748587, 0.3278605044, 0.2285469472, -0.1227...
https://github.com/huggingface/datasets/issues/1983
The size of CoNLL-2003 is not consistant with the official release.
I added a mention of this in conll2003's dataset card: https://github.com/huggingface/datasets/blob/fc9796920da88486c3b97690969aabf03d6b4088/datasets/conll2003/README.md#conll2003 Edit: just saw your PR @mariosasko (noticed it too late ^^) Let me take a look at it :)
Thanks for the dataset sharing! But when I use conll-2003, I meet some questions. The statistics of conll-2003 in this repo is : \#train 14041 \#dev 3250 \#test 3453 While the official statistics is: \#train 14987 \#dev 3466 \#test 3684 Wish for your reply~
30
The size of CoNLL-2003 is not consistant with the official release. Thanks for the dataset sharing! But when I use conll-2003, I meet some questions. The statistics of conll-2003 in this repo is : \#train 14041 \#dev 3250 \#test 3453 While the official statistics is: \#train 14987 \#dev 3466 \#test 3684 Wish ...
[ -0.1177798361, -0.1490900367, -0.1239805967, 0.4063590765, -0.1421679258, -0.1860253215, 0.2695666254, 0.010811206, -0.9765857458, 0.1297612935, 0.0317088179, -0.0465300158, 0.0295427423, 0.1740382761, -0.07360439, 0.1277455688, 0.0628561154, -0.1194889843, -0.0711035579, -0.16...
https://github.com/huggingface/datasets/issues/1981
wmt datasets fail to load
yes, of course, I reverted to the version before that and it works ;) but since a new release was just made you will probably need to make a hotfix. and add the wmt to the tests?
on master: ``` python -c 'from datasets import load_dataset; load_dataset("wmt14", "de-en")' Downloading and preparing dataset wmt14/de-en (download: Unknown size, generated: Unknown size, post-processed: Unknown size, total: Unknown size) to /home/stas/.cache/huggingface/datasets/wmt14/de-en/1.0.0/43e717d978d226150...
37
wmt datasets fail to load on master: ``` python -c 'from datasets import load_dataset; load_dataset("wmt14", "de-en")' Downloading and preparing dataset wmt14/de-en (download: Unknown size, generated: Unknown size, post-processed: Unknown size, total: Unknown size) to /home/stas/.cache/huggingface/datasets/wmt14/d...
[ -0.3290720284, -0.0584941655, -0.0104082497, 0.5395738482, 0.31043154, 0.0041046697, 0.209779799, 0.0872705206, 0.3129353225, 0.1078863814, -0.0224997792, -0.1256617457, -0.296931684, 0.1887944639, 0.1114162803, 0.1848194599, -0.1364243627, 0.005854344, -0.8216049671, 0.0158685...
https://github.com/huggingface/datasets/issues/1981
wmt datasets fail to load
@stas00 it is fixed. @lhoestq are you releasing the hot fix or would you prefer me to do it?
on master: ``` python -c 'from datasets import load_dataset; load_dataset("wmt14", "de-en")' Downloading and preparing dataset wmt14/de-en (download: Unknown size, generated: Unknown size, post-processed: Unknown size, total: Unknown size) to /home/stas/.cache/huggingface/datasets/wmt14/de-en/1.0.0/43e717d978d226150...
19
wmt datasets fail to load on master: ``` python -c 'from datasets import load_dataset; load_dataset("wmt14", "de-en")' Downloading and preparing dataset wmt14/de-en (download: Unknown size, generated: Unknown size, post-processed: Unknown size, total: Unknown size) to /home/stas/.cache/huggingface/datasets/wmt14/d...
[ -0.3290720284, -0.0584941655, -0.0104082497, 0.5395738482, 0.31043154, 0.0041046697, 0.209779799, 0.0872705206, 0.3129353225, 0.1078863814, -0.0224997792, -0.1256617457, -0.296931684, 0.1887944639, 0.1114162803, 0.1848194599, -0.1364243627, 0.005854344, -0.8216049671, 0.0158685...
https://github.com/huggingface/datasets/issues/1981
wmt datasets fail to load
I'll do a patch release for this issue early tomorrow. And yes we absolutly need tests for the wmt datasets: The missing tests for wmt are an artifact from the early development of the lib but now we have tools to generate automatically the dummy data used for tests :)
on master: ``` python -c 'from datasets import load_dataset; load_dataset("wmt14", "de-en")' Downloading and preparing dataset wmt14/de-en (download: Unknown size, generated: Unknown size, post-processed: Unknown size, total: Unknown size) to /home/stas/.cache/huggingface/datasets/wmt14/de-en/1.0.0/43e717d978d226150...
50
wmt datasets fail to load on master: ``` python -c 'from datasets import load_dataset; load_dataset("wmt14", "de-en")' Downloading and preparing dataset wmt14/de-en (download: Unknown size, generated: Unknown size, post-processed: Unknown size, total: Unknown size) to /home/stas/.cache/huggingface/datasets/wmt14/d...
[ -0.3290720284, -0.0584941655, -0.0104082497, 0.5395738482, 0.31043154, 0.0041046697, 0.209779799, 0.0872705206, 0.3129353225, 0.1078863814, -0.0224997792, -0.1256617457, -0.296931684, 0.1887944639, 0.1114162803, 0.1848194599, -0.1364243627, 0.005854344, -0.8216049671, 0.0158685...
https://github.com/huggingface/datasets/issues/1981
wmt datasets fail to load
still facing the same issue or similar: from datasets import load_dataset wtm14_test = load_dataset('wmt14',"de-en",cache_dir='./datasets') ~.cache\huggingface\modules\datasets_modules\datasets\wmt14\43e717d978d2261502b0194999583acb874ba73b0f4aed0ada2889d1bb00f36e\wmt_utils.py in _split_generators(self, dl_manager...
on master: ``` python -c 'from datasets import load_dataset; load_dataset("wmt14", "de-en")' Downloading and preparing dataset wmt14/de-en (download: Unknown size, generated: Unknown size, post-processed: Unknown size, total: Unknown size) to /home/stas/.cache/huggingface/datasets/wmt14/de-en/1.0.0/43e717d978d226150...
52
wmt datasets fail to load on master: ``` python -c 'from datasets import load_dataset; load_dataset("wmt14", "de-en")' Downloading and preparing dataset wmt14/de-en (download: Unknown size, generated: Unknown size, post-processed: Unknown size, total: Unknown size) to /home/stas/.cache/huggingface/datasets/wmt14/d...
[ -0.3290720284, -0.0584941655, -0.0104082497, 0.5395738482, 0.31043154, 0.0041046697, 0.209779799, 0.0872705206, 0.3129353225, 0.1078863814, -0.0224997792, -0.1256617457, -0.296931684, 0.1887944639, 0.1114162803, 0.1848194599, -0.1364243627, 0.005854344, -0.8216049671, 0.0158685...
https://github.com/huggingface/datasets/issues/1977
ModuleNotFoundError: No module named 'apache_beam' for wikipedia datasets
I sometimes also get this error with other languages of the same dataset: File "/dara/libs/anaconda3/envs/code/lib/python3.7/site-packages/datasets-1.3.0-py3.7.egg/datasets/arrow_reader.py", line 322, in read_table stream = stream_from(filename) File "pyarrow/io.pxi", line 782, in pyarrow.lib.memory_map ...
Hi I am trying to run run_mlm.py code [1] of huggingface with following "wikipedia"/ "20200501.aa" dataset: `python run_mlm.py --model_name_or_path bert-base-multilingual-cased --dataset_name wikipedia --dataset_config_name 20200501.aa --do_train --do_eval --output_dir /tmp/test-mlm --max_seq_l...
55
ModuleNotFoundError: No module named 'apache_beam' for wikipedia datasets Hi I am trying to run run_mlm.py code [1] of huggingface with following "wikipedia"/ "20200501.aa" dataset: `python run_mlm.py --model_name_or_path bert-base-multilingual-cased --dataset_name wikipedia --dataset_config_name 202005...
[ -0.2131416649, -0.3383004069, 0.0015386621, 0.3398338556, 0.2197448462, 0.2354417443, 0.2507149279, 0.2680619955, 0.1814851612, -0.0144519843, -0.0359321311, 0.0364904329, -0.1710566133, 0.1019866467, 0.1627784073, -0.4244245589, 0.1850164682, -0.1324692667, -0.2871963382, -0.0...
https://github.com/huggingface/datasets/issues/1977
ModuleNotFoundError: No module named 'apache_beam' for wikipedia datasets
Hi ! Thanks for reporting Some wikipedia configurations do require the user to have `apache_beam` in order to parse the wikimedia data. On the other hand regarding your second issue ``` OSError: Memory mapping file failed: Cannot allocate memory ``` I've never experienced this, can you open a new issue for this...
Hi I am trying to run run_mlm.py code [1] of huggingface with following "wikipedia"/ "20200501.aa" dataset: `python run_mlm.py --model_name_or_path bert-base-multilingual-cased --dataset_name wikipedia --dataset_config_name 20200501.aa --do_train --do_eval --output_dir /tmp/test-mlm --max_seq_l...
84
ModuleNotFoundError: No module named 'apache_beam' for wikipedia datasets Hi I am trying to run run_mlm.py code [1] of huggingface with following "wikipedia"/ "20200501.aa" dataset: `python run_mlm.py --model_name_or_path bert-base-multilingual-cased --dataset_name wikipedia --dataset_config_name 202005...
[ -0.2131416649, -0.3383004069, 0.0015386621, 0.3398338556, 0.2197448462, 0.2354417443, 0.2507149279, 0.2680619955, 0.1814851612, -0.0144519843, -0.0359321311, 0.0364904329, -0.1710566133, 0.1019866467, 0.1627784073, -0.4244245589, 0.1850164682, -0.1324692667, -0.2871963382, -0.0...
https://github.com/huggingface/datasets/issues/1973
Question: what gets stored in the datasets cache and why is it so huge?
Echo'ing this observation: I have a few datasets in the neighborhood of 2GB CSVs uncompressed, and when I use something like `Dataset.save_to_disk()` it's ~18GB on disk. If this is unexpected behavior, would be happy to help run debugging as needed.
I'm running several training jobs (around 10) with a relatively large dataset (3M samples). The datasets cache reached 178G and it seems really large. What is it stored in there and why is it so large? I don't think I noticed this problem before and seems to be related to the new version of the datasets library. Any in...
40
Question: what gets stored in the datasets cache and why is it so huge? I'm running several training jobs (around 10) with a relatively large dataset (3M samples). The datasets cache reached 178G and it seems really large. What is it stored in there and why is it so large? I don't think I noticed this problem before ...
[ -0.0360605754, -0.0662573799, -0.1086275727, 0.5382815003, 0.1768521369, 0.3046649992, -0.0687093884, 0.2563446164, -0.1498933136, -0.1115516722, -0.0044143647, -0.2384365052, -0.1599157602, -0.229541868, 0.1550872624, 0.2438127249, 0.1417649388, -0.0338773765, -0.0882984996, -...
https://github.com/huggingface/datasets/issues/1973
Question: what gets stored in the datasets cache and why is it so huge?
Thanks @ioana-blue for pointing out this problem (and thanks also @justin-yan). You are right that current implementation of the datasets caching files take too much memory. We are definitely changing this and optimizing the defaults, so that the file sizes are considerably reduced. I will come back to you as soon as t...
I'm running several training jobs (around 10) with a relatively large dataset (3M samples). The datasets cache reached 178G and it seems really large. What is it stored in there and why is it so large? I don't think I noticed this problem before and seems to be related to the new version of the datasets library. Any in...
55
Question: what gets stored in the datasets cache and why is it so huge? I'm running several training jobs (around 10) with a relatively large dataset (3M samples). The datasets cache reached 178G and it seems really large. What is it stored in there and why is it so large? I don't think I noticed this problem before ...
[ -0.0833512694, 0.0121253347, -0.129175663, 0.5186684132, 0.1493166834, 0.2426840067, -0.0757898465, 0.3019646108, -0.0877330229, -0.0359355286, -0.0295593664, -0.2534342408, -0.0936840177, -0.2008034289, 0.0723078474, 0.1365635842, 0.0901817903, -0.0653724819, -0.0765936598, -0...
https://github.com/huggingface/datasets/issues/1973
Question: what gets stored in the datasets cache and why is it so huge?
Thank you! Also I noticed that the files don't seem to be cleaned after the jobs finish. Last night I had only 3 jobs running, but the cache was still at 180GB.
I'm running several training jobs (around 10) with a relatively large dataset (3M samples). The datasets cache reached 178G and it seems really large. What is it stored in there and why is it so large? I don't think I noticed this problem before and seems to be related to the new version of the datasets library. Any in...
32
Question: what gets stored in the datasets cache and why is it so huge? I'm running several training jobs (around 10) with a relatively large dataset (3M samples). The datasets cache reached 178G and it seems really large. What is it stored in there and why is it so large? I don't think I noticed this problem before ...
[ -0.1613564193, 0.0733158514, -0.1333291233, 0.5806112289, 0.0784173682, 0.2742607892, -0.0560066402, 0.245710358, -0.124497965, -0.0830652267, -0.0408940762, -0.2797769606, -0.1030298099, -0.1785739958, 0.0967971906, 0.2181342095, 0.0879471526, -0.120729737, -0.1814739406, -0.1...
https://github.com/huggingface/datasets/issues/1973
Question: what gets stored in the datasets cache and why is it so huge?
Hi ! As Albert said they can sometimes take more space that expected but we'll fix that soon. Also, to give more details about caching: computations on a dataset are cached by default so that you don't have to recompute them the next time you run them. So by default the cache files stay on your disk when you job ...
I'm running several training jobs (around 10) with a relatively large dataset (3M samples). The datasets cache reached 178G and it seems really large. What is it stored in there and why is it so large? I don't think I noticed this problem before and seems to be related to the new version of the datasets library. Any in...
95
Question: what gets stored in the datasets cache and why is it so huge? I'm running several training jobs (around 10) with a relatively large dataset (3M samples). The datasets cache reached 178G and it seems really large. What is it stored in there and why is it so large? I don't think I noticed this problem before ...
[ -0.0611621961, -0.0334957354, -0.1297228783, 0.512793839, 0.1173945814, 0.2473039925, -0.0104232961, 0.2427642494, -0.0756754428, -0.0384479873, -0.0576454923, -0.2856125236, -0.0628410876, -0.1385335475, 0.1360729635, 0.0736557618, 0.0571288578, -0.0924019367, -0.0630483255, -...
https://github.com/huggingface/datasets/issues/1973
Question: what gets stored in the datasets cache and why is it so huge?
Hi @ioana-blue, we have optimized Datasets' disk usage in the latest release v1.5. Feel free to update your Datasets version ```shell pip install -U datasets ``` and see if it better suits your needs.
I'm running several training jobs (around 10) with a relatively large dataset (3M samples). The datasets cache reached 178G and it seems really large. What is it stored in there and why is it so large? I don't think I noticed this problem before and seems to be related to the new version of the datasets library. Any in...
34
Question: what gets stored in the datasets cache and why is it so huge? I'm running several training jobs (around 10) with a relatively large dataset (3M samples). The datasets cache reached 178G and it seems really large. What is it stored in there and why is it so large? I don't think I noticed this problem before ...
[ -0.1255367398, -0.0336778089, -0.14823246, 0.5212610364, 0.1614961624, 0.2426844835, -0.0925319791, 0.2584301233, -0.0802799016, -0.0226320326, -0.0793154463, -0.1993967891, -0.0890248716, -0.1937062144, 0.0600060374, 0.1441114396, 0.1028741077, -0.0664708465, -0.070582144, -0....
https://github.com/huggingface/datasets/issues/1965
Can we parallelized the add_faiss_index process over dataset shards ?
Hi ! As far as I know not all faiss indexes can be computed in parallel and then merged. For example [here](https://github.com/facebookresearch/faiss/wiki/Special-operations-on-indexes#splitting-and-merging-indexes) is is mentioned that only IndexIVF indexes can be merged. Moreover faiss already works using multith...
I am thinking of making the **add_faiss_index** process faster. What if we run the add_faiss_index process on separate dataset shards and then combine them before (dataset.concatenate) saving the faiss.index file ? I feel theoretically this will reduce the accuracy of retrieval since it affects the indexing process...
79
Can we parallelized the add_faiss_index process over dataset shards ? I am thinking of making the **add_faiss_index** process faster. What if we run the add_faiss_index process on separate dataset shards and then combine them before (dataset.concatenate) saving the faiss.index file ? I feel theoretically this wil...
[ -0.2789410353, -0.0627319962, -0.1573459953, 0.1106121838, -0.3771445751, 0.2478515506, 0.2097887248, 0.1062705591, 0.1371022165, 0.1719248295, -0.1766027659, -0.1353846043, 0.3414870501, 0.1139963791, -0.3853701353, 0.2495386302, 0.351341486, -0.103391543, 0.2524389029, 0.1125...
https://github.com/huggingface/datasets/issues/1965
Can we parallelized the add_faiss_index process over dataset shards ?
Actually, you are right. I also had the same idea. I am trying this in the context of end-ton-end retrieval training in RAG. So far I have parallelized the embedding re-computation within the training loop by using datasets shards. Then I was thinking of can I calculate the indexes for each shard and combined them...
I am thinking of making the **add_faiss_index** process faster. What if we run the add_faiss_index process on separate dataset shards and then combine them before (dataset.concatenate) saving the faiss.index file ? I feel theoretically this will reduce the accuracy of retrieval since it affects the indexing process...
60
Can we parallelized the add_faiss_index process over dataset shards ? I am thinking of making the **add_faiss_index** process faster. What if we run the add_faiss_index process on separate dataset shards and then combine them before (dataset.concatenate) saving the faiss.index file ? I feel theoretically this wil...
[ -0.2849660218, -0.1696968824, -0.0912609622, 0.1680027097, -0.3235115707, 0.3758882582, 0.3347997665, 0.06972038, -0.035886772, 0.1546952128, -0.0876064822, 0.0618727617, 0.3824172914, -0.020076016, -0.3032029569, 0.1434957534, 0.2809818983, -0.0584645979, 0.2126168758, 0.03947...
https://github.com/huggingface/datasets/issues/1965
Can we parallelized the add_faiss_index process over dataset shards ?
@lhoestq As you mentioned faiss is already using multiprocessing. I tried to do the add_index with faiss for a dataset object inside a RAY actor and the process became very slow... if fact it takes so much time. It is because a ray actor comes with a single CPU core unless we assign it more. I also tried assigning mor...
I am thinking of making the **add_faiss_index** process faster. What if we run the add_faiss_index process on separate dataset shards and then combine them before (dataset.concatenate) saving the faiss.index file ? I feel theoretically this will reduce the accuracy of retrieval since it affects the indexing process...
72
Can we parallelized the add_faiss_index process over dataset shards ? I am thinking of making the **add_faiss_index** process faster. What if we run the add_faiss_index process on separate dataset shards and then combine them before (dataset.concatenate) saving the faiss.index file ? I feel theoretically this wil...
[ -0.4054990709, -0.1552562416, -0.1257980019, 0.183027789, -0.305123806, 0.2511135042, 0.3391771317, 0.0727011934, 0.1347900927, 0.1768684238, -0.1435732991, 0.1960877925, 0.3504762053, 0.0879318342, -0.2222459614, 0.0771599039, 0.3349465132, -0.073750861, 0.1878648102, 0.144266...
https://github.com/huggingface/datasets/issues/1964
Datasets.py function load_dataset does not match squad dataset
Hi ! To fix 1, an you try to run this code ? ```python from datasets import load_dataset load_dataset("squad", download_mode="force_redownload") ``` Maybe the file your downloaded was corrupted, in this case redownloading this way should fix your issue 1. Regarding your 2nd point, you're right that loading...
### 1 When I try to train lxmert,and follow the code in README that --dataset name: ```shell python examples/question-answering/run_qa.py --model_name_or_path unc-nlp/lxmert-base-uncased --dataset_name squad --do_train --do_eval --per_device_train_batch_size 12 --learning_rate 3e-5 --num_train_epochs 2 --max_seq_len...
170
Datasets.py function load_dataset does not match squad dataset ### 1 When I try to train lxmert,and follow the code in README that --dataset name: ```shell python examples/question-answering/run_qa.py --model_name_or_path unc-nlp/lxmert-base-uncased --dataset_name squad --do_train --do_eval --per_device_train_batc...
[ -0.353720516, 0.062508136, 0.0333653875, 0.3963990211, 0.5554528236, 0.0057210471, 0.5457585454, 0.3404690027, -0.0983098298, -0.1216232702, -0.1650660038, 0.4508371055, 0.1084151044, -0.0774194002, 0.1859293282, 0.2271615863, -0.1242240593, -0.0123368576, -0.146108225, -0.2893...
https://github.com/huggingface/datasets/issues/1964
Datasets.py function load_dataset does not match squad dataset
Thks for quickly answering! ### 1 I try the first way,but seems not work ``` Traceback (most recent call last): File "examples/question-answering/run_qa.py", line 503, in <module> main() File "examples/question-answering/run_qa.py", line 218, in main datasets = load_dataset(data_args.dataset_name, d...
### 1 When I try to train lxmert,and follow the code in README that --dataset name: ```shell python examples/question-answering/run_qa.py --model_name_or_path unc-nlp/lxmert-base-uncased --dataset_name squad --do_train --do_eval --per_device_train_batch_size 12 --learning_rate 3e-5 --num_train_epochs 2 --max_seq_len...
434
Datasets.py function load_dataset does not match squad dataset ### 1 When I try to train lxmert,and follow the code in README that --dataset name: ```shell python examples/question-answering/run_qa.py --model_name_or_path unc-nlp/lxmert-base-uncased --dataset_name squad --do_train --do_eval --per_device_train_batc...
[ -0.353720516, 0.062508136, 0.0333653875, 0.3963990211, 0.5554528236, 0.0057210471, 0.5457585454, 0.3404690027, -0.0983098298, -0.1216232702, -0.1650660038, 0.4508371055, 0.1084151044, -0.0774194002, 0.1859293282, 0.2271615863, -0.1242240593, -0.0123368576, -0.146108225, -0.2893...
https://github.com/huggingface/datasets/issues/1964
Datasets.py function load_dataset does not match squad dataset
## I have fixed it, @lhoestq ### the first section change as you said and add ["id"] ```python def process_squad(examples): """ Process a dataset in the squad format with columns "title" and "paragraphs" to return the dataset with columns "context", "question" and "answers". """ # print(exa...
### 1 When I try to train lxmert,and follow the code in README that --dataset name: ```shell python examples/question-answering/run_qa.py --model_name_or_path unc-nlp/lxmert-base-uncased --dataset_name squad --do_train --do_eval --per_device_train_batch_size 12 --learning_rate 3e-5 --num_train_epochs 2 --max_seq_len...
569
Datasets.py function load_dataset does not match squad dataset ### 1 When I try to train lxmert,and follow the code in README that --dataset name: ```shell python examples/question-answering/run_qa.py --model_name_or_path unc-nlp/lxmert-base-uncased --dataset_name squad --do_train --do_eval --per_device_train_batc...
[ -0.353720516, 0.062508136, 0.0333653875, 0.3963990211, 0.5554528236, 0.0057210471, 0.5457585454, 0.3404690027, -0.0983098298, -0.1216232702, -0.1650660038, 0.4508371055, 0.1084151044, -0.0774194002, 0.1859293282, 0.2271615863, -0.1242240593, -0.0123368576, -0.146108225, -0.2893...
https://github.com/huggingface/datasets/issues/1964
Datasets.py function load_dataset does not match squad dataset
I'm glad you managed to fix run_qa.py for your case :) Regarding the checksum error, I'm not able to reproduce on my side. This errors says that the downloaded file doesn't match the expected file. Could you try running this and let me know if you get the same output as me ? ```python from datasets.utils.info_...
### 1 When I try to train lxmert,and follow the code in README that --dataset name: ```shell python examples/question-answering/run_qa.py --model_name_or_path unc-nlp/lxmert-base-uncased --dataset_name squad --do_train --do_eval --per_device_train_batch_size 12 --learning_rate 3e-5 --num_train_epochs 2 --max_seq_len...
69
Datasets.py function load_dataset does not match squad dataset ### 1 When I try to train lxmert,and follow the code in README that --dataset name: ```shell python examples/question-answering/run_qa.py --model_name_or_path unc-nlp/lxmert-base-uncased --dataset_name squad --do_train --do_eval --per_device_train_batc...
[ -0.353720516, 0.062508136, 0.0333653875, 0.3963990211, 0.5554528236, 0.0057210471, 0.5457585454, 0.3404690027, -0.0983098298, -0.1216232702, -0.1650660038, 0.4508371055, 0.1084151044, -0.0774194002, 0.1859293282, 0.2271615863, -0.1242240593, -0.0123368576, -0.146108225, -0.2893...
https://github.com/huggingface/datasets/issues/1964
Datasets.py function load_dataset does not match squad dataset
I run the code,and it show below: ``` >>> from datasets.utils.info_utils import get_size_checksum_dict >>> from datasets import cached_path >>> get_size_checksum_dict(cached_path("https://rajpurkar.github.io/SQuAD-explorer/dataset/train-v1.1.json")) Downloading: 30.3MB [04:13, 120kB/s] {'num_bytes': 30288272, 'ch...
### 1 When I try to train lxmert,and follow the code in README that --dataset name: ```shell python examples/question-answering/run_qa.py --model_name_or_path unc-nlp/lxmert-base-uncased --dataset_name squad --do_train --do_eval --per_device_train_batch_size 12 --learning_rate 3e-5 --num_train_epochs 2 --max_seq_len...
29
Datasets.py function load_dataset does not match squad dataset ### 1 When I try to train lxmert,and follow the code in README that --dataset name: ```shell python examples/question-answering/run_qa.py --model_name_or_path unc-nlp/lxmert-base-uncased --dataset_name squad --do_train --do_eval --per_device_train_batc...
[ -0.353720516, 0.062508136, 0.0333653875, 0.3963990211, 0.5554528236, 0.0057210471, 0.5457585454, 0.3404690027, -0.0983098298, -0.1216232702, -0.1650660038, 0.4508371055, 0.1084151044, -0.0774194002, 0.1859293282, 0.2271615863, -0.1242240593, -0.0123368576, -0.146108225, -0.2893...
https://github.com/huggingface/datasets/issues/1964
Datasets.py function load_dataset does not match squad dataset
Alright ! So in this case redownloading the file with `download_mode="force_redownload"` should fix it. Can you try using `download_mode="force_redownload"` again ? Not sure why it didn't work for you the first time though :/
### 1 When I try to train lxmert,and follow the code in README that --dataset name: ```shell python examples/question-answering/run_qa.py --model_name_or_path unc-nlp/lxmert-base-uncased --dataset_name squad --do_train --do_eval --per_device_train_batch_size 12 --learning_rate 3e-5 --num_train_epochs 2 --max_seq_len...
34
Datasets.py function load_dataset does not match squad dataset ### 1 When I try to train lxmert,and follow the code in README that --dataset name: ```shell python examples/question-answering/run_qa.py --model_name_or_path unc-nlp/lxmert-base-uncased --dataset_name squad --do_train --do_eval --per_device_train_batc...
[ -0.353720516, 0.062508136, 0.0333653875, 0.3963990211, 0.5554528236, 0.0057210471, 0.5457585454, 0.3404690027, -0.0983098298, -0.1216232702, -0.1650660038, 0.4508371055, 0.1084151044, -0.0774194002, 0.1859293282, 0.2271615863, -0.1242240593, -0.0123368576, -0.146108225, -0.2893...
https://github.com/huggingface/datasets/issues/1963
bug in SNLI dataset
Hi ! The labels -1 correspond to the examples without gold labels in the original snli dataset. Feel free to remove these examples if you don't need them by using ```python data = data.filter(lambda x: x["label"] != -1) ```
Hi There is label of -1 in train set of SNLI dataset, please find the code below: ``` import numpy as np import datasets data = datasets.load_dataset("snli")["train"] labels = [] for d in data: labels.append(d["label"]) print(np.unique(labels)) ``` and results: `[-1 0 1 2]` version of datas...
39
bug in SNLI dataset Hi There is label of -1 in train set of SNLI dataset, please find the code below: ``` import numpy as np import datasets data = datasets.load_dataset("snli")["train"] labels = [] for d in data: labels.append(d["label"]) print(np.unique(labels)) ``` and results: `[-1 0 1 ...
[ 0.1758221835, -0.2825256586, -0.1035732925, 0.3496331871, 0.2219780236, 0.0471981317, 0.3874209523, 0.118343696, 0.0808188692, 0.2969246209, -0.2027284056, 0.6121096611, -0.1427201033, 0.0958904251, 0.0018212005, -0.0009764511, 0.2330547422, 0.3379024863, 0.2097957134, -0.49668...
https://github.com/huggingface/datasets/issues/1959
Bug in skip_rows argument of load_dataset function ?
Hi, try `skiprows` instead. This part is not properly documented in the docs it seems. @lhoestq I'll fix this as part of a bigger PR that fixes typos in the docs.
Hello everyone, I'm quite new to Git so sorry in advance if I'm breaking some ground rules of issues posting... :/ I tried to use the load_dataset function, from Huggingface datasets library, on a csv file using the skip_rows argument described on Huggingface page to skip the first row containing column names `t...
31
Bug in skip_rows argument of load_dataset function ? Hello everyone, I'm quite new to Git so sorry in advance if I'm breaking some ground rules of issues posting... :/ I tried to use the load_dataset function, from Huggingface datasets library, on a csv file using the skip_rows argument described on Huggingface p...
[ 0.1184588745, -0.5157741308, -0.0002607238, 0.1341338456, 0.2405335307, 0.3162592947, 0.3996221721, 0.1163903475, 0.1897605658, 0.2767885327, 0.3576671183, 0.3101137877, 0.0523398146, 0.1391008049, 0.212282747, 0.0097794868, 0.3222462535, -0.0971089825, -0.2278703153, -0.142526...
https://github.com/huggingface/datasets/issues/1957
[request] make load_metric api intutive
I agree with this proposal. IMO, `num_process` can also be misleading without reading the docs because this option may seem to leverage `multiprocessing` to compute the final result, which is not the case. @lhoestq @albertvillanova Are you OK with breaking the API for v2.0 and renaming the params as follows: * `num...
``` metric = load_metric('glue', 'mrpc', num_process=num_process, process_id=rank) ``` May I suggest that `num_process` is confusing as it's singular yet expects a plural value and either * be deprecated in favor of `num_processes` which is more intuitive since it's plural as its expected value * or even better...
58
[request] make load_metric api intutive ``` metric = load_metric('glue', 'mrpc', num_process=num_process, process_id=rank) ``` May I suggest that `num_process` is confusing as it's singular yet expects a plural value and either * be deprecated in favor of `num_processes` which is more intuitive since it's plur...
[ -0.2996609211, -0.6217932105, -0.0776229724, 0.1083437949, 0.3084196746, -0.2696101367, 0.2294079065, -0.1367820948, 0.3890975714, 0.4229172468, 0.0213311817, 0.3503682613, -0.0348374099, -0.0302776881, 0.1602787375, -0.3683152199, -0.1868816465, -0.0273016021, -0.164674297, 0....
https://github.com/huggingface/datasets/issues/1957
[request] make load_metric api intutive
I don't think that's a good idea for 2.0, we may have a new library for metrics anyway. Note that we will need an API that also makes sense for TF users
``` metric = load_metric('glue', 'mrpc', num_process=num_process, process_id=rank) ``` May I suggest that `num_process` is confusing as it's singular yet expects a plural value and either * be deprecated in favor of `num_processes` which is more intuitive since it's plural as its expected value * or even better...
32
[request] make load_metric api intutive ``` metric = load_metric('glue', 'mrpc', num_process=num_process, process_id=rank) ``` May I suggest that `num_process` is confusing as it's singular yet expects a plural value and either * be deprecated in favor of `num_processes` which is more intuitive since it's plur...
[ -0.2996609211, -0.6217932105, -0.0776229724, 0.1083437949, 0.3084196746, -0.2696101367, 0.2294079065, -0.1367820948, 0.3890975714, 0.4229172468, 0.0213311817, 0.3503682613, -0.0348374099, -0.0302776881, 0.1602787375, -0.3683152199, -0.1868816465, -0.0273016021, -0.164674297, 0....
https://github.com/huggingface/datasets/issues/1956
[distributed env] potentially unsafe parallel execution
You can pass the same `experiment_id` for all the metrics of the same group, and use another `experiment_id` for the other groups. Maybe we can add an environment variable that sets the default value for `experiment_id` ? What do you think ?
``` metric = load_metric('glue', 'mrpc', num_process=num_process, process_id=rank) ``` presumes that there is only one set of parallel processes running - and will intermittently fail if you have multiple sets running as they will surely overwrite each other. Similar to https://github.com/huggingface/datasets/issu...
42
[distributed env] potentially unsafe parallel execution ``` metric = load_metric('glue', 'mrpc', num_process=num_process, process_id=rank) ``` presumes that there is only one set of parallel processes running - and will intermittently fail if you have multiple sets running as they will surely overwrite each othe...
[ -0.3388578296, -0.4435222745, -0.0121533908, 0.0259140898, -0.0003200319, -0.1188518181, 0.4191742837, -0.0220881831, 0.694543004, 0.334756881, -0.0704571903, 0.2575862706, 0.020857811, 0.0921891034, -0.1329858452, -0.0292747766, -0.06194143, -0.1000038311, -0.1330745667, -0.13...
https://github.com/huggingface/datasets/issues/1956
[distributed env] potentially unsafe parallel execution
Ah, you're absolutely correct, @lhoestq - it's exactly the equivalent of the shared secret. Thank you!
``` metric = load_metric('glue', 'mrpc', num_process=num_process, process_id=rank) ``` presumes that there is only one set of parallel processes running - and will intermittently fail if you have multiple sets running as they will surely overwrite each other. Similar to https://github.com/huggingface/datasets/issu...
16
[distributed env] potentially unsafe parallel execution ``` metric = load_metric('glue', 'mrpc', num_process=num_process, process_id=rank) ``` presumes that there is only one set of parallel processes running - and will intermittently fail if you have multiple sets running as they will surely overwrite each othe...
[ -0.2438626289, -0.5634173751, -0.0252404492, -0.0789235681, -0.088331379, -0.0465873741, 0.4310411215, -0.0947356373, 0.6884128451, 0.3382454216, -0.0324465074, 0.2416060269, 0.0294991489, 0.0651076138, -0.0662355497, 0.0515501313, -0.0137191424, -0.0775891468, -0.2323585749, -...
https://github.com/huggingface/datasets/issues/1954
add a new column
Hi not sure how change the lable after creation, but this is an issue not dataset request. thanks
Hi I'd need to add a new column to the dataset, I was wondering how this can be done? thanks @lhoestq
18
add a new column Hi I'd need to add a new column to the dataset, I was wondering how this can be done? thanks @lhoestq Hi not sure how change the lable after creation, but this is an issue not dataset request. thanks
[ -0.2342519164, -0.0549113378, -0.1812906265, -0.0524511486, 0.0165326912, 0.0354574844, 0.3398597538, -0.0366016701, 0.0798498094, 0.1445067525, 0.0107777975, 0.1978343278, 0.0306360573, 0.3719206154, 0.1402909309, 0.1070636064, -0.2483592629, 0.4518116713, -0.1740529388, -0.07...
https://github.com/huggingface/datasets/issues/1954
add a new column
Hi ! Currently you have to use `map` . You can see an example of how to do it in this comment: https://github.com/huggingface/datasets/issues/853#issuecomment-727872188 In the future we'll add support for a more native way of adding a new column ;)
Hi I'd need to add a new column to the dataset, I was wondering how this can be done? thanks @lhoestq
40
add a new column Hi I'd need to add a new column to the dataset, I was wondering how this can be done? thanks @lhoestq Hi ! Currently you have to use `map` . You can see an example of how to do it in this comment: https://github.com/huggingface/datasets/issues/853#issuecomment-727872188 In the future we'll ...
[ -0.2675051689, -0.326751709, -0.2293750793, -0.0306970775, 0.0712796375, 0.1932588071, 0.2296765298, 0.0986113846, 0.2282395363, 0.1450355947, -0.1557606608, 0.1634853333, 0.0027358972, 0.5119233727, 0.2227694988, -0.1781964004, -0.0625529289, 0.198812902, -0.0534241237, 0.0027...
https://github.com/huggingface/datasets/issues/1949
Enable Fast Filtering using Arrow Dataset
Hi @gchhablani :) Thanks for proposing your help ! I'll be doing a refactor of some parts related to filtering in the scope of https://github.com/huggingface/datasets/issues/1877 So I would first wait for this refactor to be done before working on the filtering. In particular because I plan to make things simpler ...
Hi @lhoestq, As mentioned in Issue #1796, I would love to work on enabling fast filtering/mapping. Can you please share the expectations? It would be great if you could point me to the relevant methods/files involved. Or the docs or maybe an overview of `arrow_dataset.py`. I only ask this because I am having trouble...
113
Enable Fast Filtering using Arrow Dataset Hi @lhoestq, As mentioned in Issue #1796, I would love to work on enabling fast filtering/mapping. Can you please share the expectations? It would be great if you could point me to the relevant methods/files involved. Or the docs or maybe an overview of `arrow_dataset.py`....
[ -0.0609657317, -0.2426417023, -0.1764557511, -0.0540160835, 0.0382514894, -0.2388273627, -0.1074124202, 0.2039252073, 0.2231047601, -0.1845401525, -0.2857308388, 0.4401740134, -0.1111104339, 0.3817577064, 0.0005042026, -0.2247795314, -0.1491407007, -0.1010930166, -0.1609330475, ...
https://github.com/huggingface/datasets/issues/1949
Enable Fast Filtering using Arrow Dataset
Sure! I don't mind waiting. I'll check the refactor and try to understand what you're trying to do :)
Hi @lhoestq, As mentioned in Issue #1796, I would love to work on enabling fast filtering/mapping. Can you please share the expectations? It would be great if you could point me to the relevant methods/files involved. Or the docs or maybe an overview of `arrow_dataset.py`. I only ask this because I am having trouble...
19
Enable Fast Filtering using Arrow Dataset Hi @lhoestq, As mentioned in Issue #1796, I would love to work on enabling fast filtering/mapping. Can you please share the expectations? It would be great if you could point me to the relevant methods/files involved. Or the docs or maybe an overview of `arrow_dataset.py`....
[ -0.1475262791, -0.0889887288, -0.2098283321, -0.0918973088, 0.0624384992, -0.2331447154, -0.0470119081, 0.2564984262, 0.1409167498, -0.1652854979, -0.2117006183, 0.5572013855, -0.1610823274, 0.234077096, -0.0721702352, -0.1115352735, -0.163025558, -0.033403445, -0.1529835612, -...
https://github.com/huggingface/datasets/issues/1948
dataset loading logger level
These warnings are showed when there's a call to `.map` to say to the user that a dataset is reloaded from the cache instead of being recomputed. They are warnings since we want to make sure the users know that it's not recomputed.
on master I get this with `--dataset_name wmt16 --dataset_config ro-en`: ``` WARNING:datasets.arrow_dataset:Loading cached processed dataset at /home/stas/.cache/huggingface/datasets/wmt16/ro-en/1.0.0/9dc00622c30446e99c4c63d12a484ea4fb653f2f37c867d6edcec839d7eae50f/cache-2e01bead8cf42e26.arrow WARNING:datasets.arr...
43
dataset loading logger level on master I get this with `--dataset_name wmt16 --dataset_config ro-en`: ``` WARNING:datasets.arrow_dataset:Loading cached processed dataset at /home/stas/.cache/huggingface/datasets/wmt16/ro-en/1.0.0/9dc00622c30446e99c4c63d12a484ea4fb653f2f37c867d6edcec839d7eae50f/cache-2e01bead8cf42...
[ -0.1510904878, -0.363607198, -0.0170672107, 0.3475189805, 0.4003513455, 0.4622991085, 0.4706436396, 0.1432848126, 0.2523233891, -0.0047668768, 0.0243789218, -0.0168784354, -0.2481700927, -0.1975330263, -0.3672414422, 0.1944216043, -0.1171082929, -0.0153789278, -0.3518668711, -0...
https://github.com/huggingface/datasets/issues/1948
dataset loading logger level
Thank you for explaining the intention, @lhoestq 1. Could it be then made more human-friendly? Currently the hex gibberish tells me nothing of what's really going on. e.g. the following is instructive, IMHO: ``` WARNING: wmt16/ro-en/train dataset was loaded from cache instead of being recomputed WARNING: wmt16...
on master I get this with `--dataset_name wmt16 --dataset_config ro-en`: ``` WARNING:datasets.arrow_dataset:Loading cached processed dataset at /home/stas/.cache/huggingface/datasets/wmt16/ro-en/1.0.0/9dc00622c30446e99c4c63d12a484ea4fb653f2f37c867d6edcec839d7eae50f/cache-2e01bead8cf42e26.arrow WARNING:datasets.arr...
351
dataset loading logger level on master I get this with `--dataset_name wmt16 --dataset_config ro-en`: ``` WARNING:datasets.arrow_dataset:Loading cached processed dataset at /home/stas/.cache/huggingface/datasets/wmt16/ro-en/1.0.0/9dc00622c30446e99c4c63d12a484ea4fb653f2f37c867d6edcec839d7eae50f/cache-2e01bead8cf42...
[ -0.1465869844, -0.2024427652, 0.0129190655, 0.3102097511, 0.4003247619, 0.3999479413, 0.4905202687, 0.2311709076, 0.1243601888, 0.0282331128, 0.0093347831, -0.1472629607, -0.2483209074, -0.1882983148, -0.2508572936, 0.0609390996, -0.1264946014, -0.0315524377, -0.3859975934, -0....
https://github.com/huggingface/datasets/issues/1948
dataset loading logger level
Hey, any news about the issue? So many warnings when I'm really ok with the dataset not being recomputed :)
on master I get this with `--dataset_name wmt16 --dataset_config ro-en`: ``` WARNING:datasets.arrow_dataset:Loading cached processed dataset at /home/stas/.cache/huggingface/datasets/wmt16/ro-en/1.0.0/9dc00622c30446e99c4c63d12a484ea4fb653f2f37c867d6edcec839d7eae50f/cache-2e01bead8cf42e26.arrow WARNING:datasets.arr...
20
dataset loading logger level on master I get this with `--dataset_name wmt16 --dataset_config ro-en`: ``` WARNING:datasets.arrow_dataset:Loading cached processed dataset at /home/stas/.cache/huggingface/datasets/wmt16/ro-en/1.0.0/9dc00622c30446e99c4c63d12a484ea4fb653f2f37c867d6edcec839d7eae50f/cache-2e01bead8cf42...
[ -0.2018210739, -0.2147749513, -0.0191504974, 0.4123525918, 0.4625154436, 0.4148596227, 0.4687034786, 0.127430439, 0.1240054592, -0.0095587233, 0.0188383516, -0.0816432834, -0.2626186013, -0.1146503836, -0.3601995111, 0.256483376, -0.0692486316, -0.0352245457, -0.4280804396, -0....
https://github.com/huggingface/datasets/issues/1942
[experiment] missing default_experiment-1-0.arrow
Hi ! The cache at `~/.cache/huggingface/metrics` stores the users data for metrics computations (hence the arrow files). However python modules (i.e. dataset scripts, metric scripts) are stored in `~/.cache/huggingface/modules/datasets_modules`. In particular the metrics are cached in `~/.cache/huggingface/mod...
the original report was pretty bad and incomplete - my apologies! Please see the complete version here: https://github.com/huggingface/datasets/issues/1942#issuecomment-786336481 ------------ As mentioned here https://github.com/huggingface/datasets/issues/1939 metrics don't get cached, looking at my local `~/...
84
[experiment] missing default_experiment-1-0.arrow the original report was pretty bad and incomplete - my apologies! Please see the complete version here: https://github.com/huggingface/datasets/issues/1942#issuecomment-786336481 ------------ As mentioned here https://github.com/huggingface/datasets/issues/19...
[ -0.035873726, -0.0009465087, 0.0741713122, 0.1184611022, 0.0837400109, -0.1097528413, 0.1720351577, 0.2317185253, 0.2638721466, 0.1391707808, 0.0678791031, 0.1733183414, -0.2854747176, -0.012178422, 0.1369280815, 0.0551353395, -0.0628576949, -0.0301498752, -0.3259330392, -0.125...
https://github.com/huggingface/datasets/issues/1942
[experiment] missing default_experiment-1-0.arrow
Thank you for clarifying that the metrics files are to be found elsewhere, @lhoestq > The cache at ~/.cache/huggingface/metrics stores the users data for metrics computations (hence the arrow files). could it be renamed to reflect that? otherwise it misleadingly suggests that it's the metrics. Perhaps `~/.cache/...
the original report was pretty bad and incomplete - my apologies! Please see the complete version here: https://github.com/huggingface/datasets/issues/1942#issuecomment-786336481 ------------ As mentioned here https://github.com/huggingface/datasets/issues/1939 metrics don't get cached, looking at my local `~/...
93
[experiment] missing default_experiment-1-0.arrow the original report was pretty bad and incomplete - my apologies! Please see the complete version here: https://github.com/huggingface/datasets/issues/1942#issuecomment-786336481 ------------ As mentioned here https://github.com/huggingface/datasets/issues/19...
[ 0.2474490553, 0.0509266518, 0.0548085123, 0.2140232921, 0.1152905971, 0.1876895577, 0.227650851, 0.3381587565, 0.2461999357, 0.1004049554, 0.1633259058, 0.1200303733, -0.3513060212, -0.1723564416, 0.0993689373, -0.027356524, 0.1081290171, -0.0378127284, -0.2317158431, -0.164974...
https://github.com/huggingface/datasets/issues/1942
[experiment] missing default_experiment-1-0.arrow
The lock files come from an issue with filelock (see comment in the code [here](https://github.com/benediktschmitt/py-filelock/blob/master/filelock.py#L394-L398)). Basically on unix there're always .lock files left behind. I haven't dove into this issue
the original report was pretty bad and incomplete - my apologies! Please see the complete version here: https://github.com/huggingface/datasets/issues/1942#issuecomment-786336481 ------------ As mentioned here https://github.com/huggingface/datasets/issues/1939 metrics don't get cached, looking at my local `~/...
30
[experiment] missing default_experiment-1-0.arrow the original report was pretty bad and incomplete - my apologies! Please see the complete version here: https://github.com/huggingface/datasets/issues/1942#issuecomment-786336481 ------------ As mentioned here https://github.com/huggingface/datasets/issues/19...
[ 0.0978653282, 0.0017493968, 0.1070124134, 0.1418966502, -0.0015059871, 0.0359288268, 0.2448604107, 0.2713265419, 0.2091091424, 0.1213854998, 0.1688696742, 0.1361202896, -0.3981263936, -0.0413281024, 0.0657580122, 0.0775554329, 0.0212285146, -0.0395922288, -0.316653043, -0.18077...
https://github.com/huggingface/datasets/issues/1942
[experiment] missing default_experiment-1-0.arrow
are you sure you need an external lock file? if it's a single purpose locking in the same scope you can lock the caller `__file__` instead, e.g. here is how one can `flock` the script file itself to ensure atomic printing: ``` import fcntl def printflock(*msgs): """ print in multiprocess env so that the outpu...
the original report was pretty bad and incomplete - my apologies! Please see the complete version here: https://github.com/huggingface/datasets/issues/1942#issuecomment-786336481 ------------ As mentioned here https://github.com/huggingface/datasets/issues/1939 metrics don't get cached, looking at my local `~/...
75
[experiment] missing default_experiment-1-0.arrow the original report was pretty bad and incomplete - my apologies! Please see the complete version here: https://github.com/huggingface/datasets/issues/1942#issuecomment-786336481 ------------ As mentioned here https://github.com/huggingface/datasets/issues/19...
[ -0.0958080515, -0.0942350626, 0.1255572438, 0.0927404538, -0.0868550539, -0.0296323933, 0.2317222357, 0.2375227213, 0.2348065674, 0.1782364547, 0.0034629612, 0.1474096775, -0.2378917038, -0.0178875886, -0.0204138737, 0.1772463918, 0.0075173462, -0.0802014247, -0.3421195745, -0....
https://github.com/huggingface/datasets/issues/1942
[experiment] missing default_experiment-1-0.arrow
OK, this issue is not about caching but some internal conflict/race condition it seems, I have just run into it on my normal env: ``` Traceback (most recent call last): File "/mnt/nvme1/code/huggingface/datasets-master/src/datasets/metric.py", line 356, in _finalize self.data = Dataset(**reader.read_files([...
the original report was pretty bad and incomplete - my apologies! Please see the complete version here: https://github.com/huggingface/datasets/issues/1942#issuecomment-786336481 ------------ As mentioned here https://github.com/huggingface/datasets/issues/1939 metrics don't get cached, looking at my local `~/...
409
[experiment] missing default_experiment-1-0.arrow the original report was pretty bad and incomplete - my apologies! Please see the complete version here: https://github.com/huggingface/datasets/issues/1942#issuecomment-786336481 ------------ As mentioned here https://github.com/huggingface/datasets/issues/19...
[ -0.0576909408, -0.0025042647, 0.1063461304, 0.3295958042, -0.0730611458, 0.0291780587, 0.1474633217, 0.2418281585, 0.3056666851, 0.1599035561, 0.2047880888, 0.07196071, -0.3148393929, -0.1378425509, -0.0102977818, 0.2024057209, 0.0270130634, 0.0037931344, -0.2088185698, -0.1798...
https://github.com/huggingface/datasets/issues/1942
[experiment] missing default_experiment-1-0.arrow
When you're using metrics in a distributed setup, there are two cases: 1. you're doing two completely different experiments (two evaluations) and the 2 metrics jobs have nothing to do with each other 2. you're doing one experiment (one evaluation) but use multiple processes to feed the data to the metric. In case ...
the original report was pretty bad and incomplete - my apologies! Please see the complete version here: https://github.com/huggingface/datasets/issues/1942#issuecomment-786336481 ------------ As mentioned here https://github.com/huggingface/datasets/issues/1939 metrics don't get cached, looking at my local `~/...
173
[experiment] missing default_experiment-1-0.arrow the original report was pretty bad and incomplete - my apologies! Please see the complete version here: https://github.com/huggingface/datasets/issues/1942#issuecomment-786336481 ------------ As mentioned here https://github.com/huggingface/datasets/issues/19...
[ -0.0598983616, 0.0068150586, 0.0844844058, 0.1580612361, 0.0199415423, -0.0320761651, 0.2774343193, 0.2207649201, 0.3088776767, 0.1602412909, 0.0177520085, 0.1031416059, -0.2257235497, -0.0206121989, -0.0592387803, -0.0310600009, -0.0435302258, -0.0667763725, -0.3691175282, -0....
https://github.com/huggingface/datasets/issues/1942
[experiment] missing default_experiment-1-0.arrow
Thank you for explaining that in a great way, @lhoestq So the bottom line is that the `transformers` examples are broken since they don't do any of that. At least `run_seq2seq.py` just does `metric = load_metric(metric_name)` What test would you recommend to reliably reproduce this bug in `examples/seq2seq/run_s...
the original report was pretty bad and incomplete - my apologies! Please see the complete version here: https://github.com/huggingface/datasets/issues/1942#issuecomment-786336481 ------------ As mentioned here https://github.com/huggingface/datasets/issues/1939 metrics don't get cached, looking at my local `~/...
48
[experiment] missing default_experiment-1-0.arrow the original report was pretty bad and incomplete - my apologies! Please see the complete version here: https://github.com/huggingface/datasets/issues/1942#issuecomment-786336481 ------------ As mentioned here https://github.com/huggingface/datasets/issues/19...
[ 0.0750926808, -0.1511458308, 0.1568089426, 0.1809433997, -0.0645125136, -0.0121278428, 0.3743320704, 0.2074545026, 0.2033591121, 0.0637142807, 0.2323808074, 0.1092877761, -0.3682079613, -0.2463027239, 0.1191040277, -0.159807235, 0.0618004128, 0.0787237734, -0.398302108, -0.1459...
https://github.com/huggingface/datasets/issues/1942
[experiment] missing default_experiment-1-0.arrow
To give more context, we are just using the metrics for the `comput_metric` function and nothing else. Is there something else we can use that just applies the function to the full arrays of predictions and labels? Because that's all we need, all the gathering has already been done because the datasets Metric multiproc...
the original report was pretty bad and incomplete - my apologies! Please see the complete version here: https://github.com/huggingface/datasets/issues/1942#issuecomment-786336481 ------------ As mentioned here https://github.com/huggingface/datasets/issues/1939 metrics don't get cached, looking at my local `~/...
85
[experiment] missing default_experiment-1-0.arrow the original report was pretty bad and incomplete - my apologies! Please see the complete version here: https://github.com/huggingface/datasets/issues/1942#issuecomment-786336481 ------------ As mentioned here https://github.com/huggingface/datasets/issues/19...
[ 0.0360683352, 0.0379048288, 0.0943588987, 0.1253586709, 0.1760240346, 0.05072283, 0.181993857, 0.2530151308, 0.2025466263, 0.251921773, 0.0820864663, 0.1800010353, -0.3751124442, 0.0665907264, 0.1416407377, -0.0772905722, 0.0167335495, 0.0033795102, -0.3133100271, -0.1381400973...
https://github.com/huggingface/datasets/issues/1942
[experiment] missing default_experiment-1-0.arrow
OK, it definitely leads to a race condition in how it's used right now. Here is how you can reproduce it - by injecting a random sleep time different for each process before the locks are acquired. ``` --- a/src/datasets/metric.py +++ b/src/datasets/metric.py @@ -348,6 +348,16 @@ class Metric(MetricInfoMixin): ...
the original report was pretty bad and incomplete - my apologies! Please see the complete version here: https://github.com/huggingface/datasets/issues/1942#issuecomment-786336481 ------------ As mentioned here https://github.com/huggingface/datasets/issues/1939 metrics don't get cached, looking at my local `~/...
452
[experiment] missing default_experiment-1-0.arrow the original report was pretty bad and incomplete - my apologies! Please see the complete version here: https://github.com/huggingface/datasets/issues/1942#issuecomment-786336481 ------------ As mentioned here https://github.com/huggingface/datasets/issues/19...
[ -0.0844155252, -0.0657196268, 0.0652178898, 0.2722368836, -0.0870860815, -0.0326647907, 0.1643526852, 0.2697965801, 0.3311938345, 0.0832056925, 0.0982609913, 0.1941851974, -0.3580646515, -0.0889754444, -0.0156052159, 0.0498046838, 0.0008351952, -0.1240272, -0.4149838686, -0.100...
https://github.com/huggingface/datasets/issues/1942
[experiment] missing default_experiment-1-0.arrow
I tried to adjust `run_seq2seq.py` and trainer to use the suggested dist env: ``` import torch.distributed as dist metric = load_metric(metric_name, num_process=dist.get_world_size(), process_id=dist.get_rank()) ``` and in `trainer.py` added to call just for rank 0: ``` if self.is_world_process_z...
the original report was pretty bad and incomplete - my apologies! Please see the complete version here: https://github.com/huggingface/datasets/issues/1942#issuecomment-786336481 ------------ As mentioned here https://github.com/huggingface/datasets/issues/1939 metrics don't get cached, looking at my local `~/...
302
[experiment] missing default_experiment-1-0.arrow the original report was pretty bad and incomplete - my apologies! Please see the complete version here: https://github.com/huggingface/datasets/issues/1942#issuecomment-786336481 ------------ As mentioned here https://github.com/huggingface/datasets/issues/19...
[ 0.0881398842, -0.1092866957, 0.1158301532, 0.2856830657, 0.0845730975, 0.0806555673, 0.2349970043, 0.2377155423, 0.2347910851, 0.2356439382, 0.0788628757, 0.1667125374, -0.3739603162, -0.2242654413, 0.0668829158, 0.0503918231, -0.0005046276, -0.1439885944, -0.22103028, -0.19510...
https://github.com/huggingface/datasets/issues/1942
[experiment] missing default_experiment-1-0.arrow
But no, since ` metric = load_metric(metric_name) ` is called for each process, the race condition is still there. So still getting: ``` ValueError: Error in finalize: another metric instance is already using the local cache file. Please specify an experiment_id to avoid colision between distributed metric in...
the original report was pretty bad and incomplete - my apologies! Please see the complete version here: https://github.com/huggingface/datasets/issues/1942#issuecomment-786336481 ------------ As mentioned here https://github.com/huggingface/datasets/issues/1939 metrics don't get cached, looking at my local `~/...
76
[experiment] missing default_experiment-1-0.arrow the original report was pretty bad and incomplete - my apologies! Please see the complete version here: https://github.com/huggingface/datasets/issues/1942#issuecomment-786336481 ------------ As mentioned here https://github.com/huggingface/datasets/issues/19...
[ -0.0743826106, 0.0189334024, 0.098625727, 0.2385593355, 0.0015654586, 0.0210894197, 0.1744566709, 0.2997217178, 0.3662588, 0.1215545982, 0.0272280686, 0.1309888959, -0.3144579232, -0.0137838349, -0.0090083247, -0.0354207493, -0.0174289811, -0.0118404869, -0.2157492191, -0.14070...
https://github.com/huggingface/datasets/issues/1942
[experiment] missing default_experiment-1-0.arrow
OK, here is a workaround that works. The onus here is absolutely on the user: ``` diff --git a/examples/seq2seq/run_seq2seq.py b/examples/seq2seq/run_seq2seq.py index 2a060dac5..c82fd83ea 100755 --- a/examples/seq2seq/run_seq2seq.py +++ b/examples/seq2seq/run_seq2seq.py @@ -520,7 +520,11 @@ def main(): ...
the original report was pretty bad and incomplete - my apologies! Please see the complete version here: https://github.com/huggingface/datasets/issues/1942#issuecomment-786336481 ------------ As mentioned here https://github.com/huggingface/datasets/issues/1939 metrics don't get cached, looking at my local `~/...
233
[experiment] missing default_experiment-1-0.arrow the original report was pretty bad and incomplete - my apologies! Please see the complete version here: https://github.com/huggingface/datasets/issues/1942#issuecomment-786336481 ------------ As mentioned here https://github.com/huggingface/datasets/issues/19...
[ -0.0640709698, -0.0787639916, 0.0965642631, 0.2534792721, -0.0056876931, -0.0212870073, 0.3123221099, 0.2638015151, 0.211024344, 0.2581809461, 0.0451733731, 0.3624680638, -0.352606833, -0.1786396056, 0.0343574323, -0.0347166024, -0.044143755, -0.1339791268, -0.358322978, -0.123...
https://github.com/huggingface/datasets/issues/1942
[experiment] missing default_experiment-1-0.arrow
I don't see how this could be the responsibility of `Trainer`, who hasn't the faintest idea of what a `datasets.Metric` is. The trainer takes a function `compute_metrics` that goes from predictions + labels to metric results, there is nothing there. That computation is done on all processes The fact a `datasets.Me...
the original report was pretty bad and incomplete - my apologies! Please see the complete version here: https://github.com/huggingface/datasets/issues/1942#issuecomment-786336481 ------------ As mentioned here https://github.com/huggingface/datasets/issues/1939 metrics don't get cached, looking at my local `~/...
144
[experiment] missing default_experiment-1-0.arrow the original report was pretty bad and incomplete - my apologies! Please see the complete version here: https://github.com/huggingface/datasets/issues/1942#issuecomment-786336481 ------------ As mentioned here https://github.com/huggingface/datasets/issues/19...
[ -0.0986770317, 0.087464869, 0.0754723325, 0.1241441593, 0.1474276036, -0.0701842383, 0.3525935709, 0.2356341928, 0.1611240059, 0.2098140866, 0.0004215879, 0.214056015, -0.3455355763, 0.1935885996, 0.0740702897, -0.0832459331, -0.0075624692, -0.1163946018, -0.2869691551, 0.00012...
https://github.com/huggingface/datasets/issues/1942
[experiment] missing default_experiment-1-0.arrow
Right, to clarify, I meant it'd be good to have it sorted on the library side and not requiring the user to figure it out. This is too complex and error-prone and if not coded correctly the bug will be intermittent which is even worse. Oh I guess I wasn't clear in my message - in no way I'm proposing that we use thi...
the original report was pretty bad and incomplete - my apologies! Please see the complete version here: https://github.com/huggingface/datasets/issues/1942#issuecomment-786336481 ------------ As mentioned here https://github.com/huggingface/datasets/issues/1939 metrics don't get cached, looking at my local `~/...
139
[experiment] missing default_experiment-1-0.arrow the original report was pretty bad and incomplete - my apologies! Please see the complete version here: https://github.com/huggingface/datasets/issues/1942#issuecomment-786336481 ------------ As mentioned here https://github.com/huggingface/datasets/issues/19...
[ 0.0017204055, 0.0483337082, 0.0741656646, 0.051863201, 0.0032178431, -0.089067474, 0.2733487487, 0.3028242886, 0.2559508979, 0.1578379422, 0.0953508914, 0.1828420609, -0.3078089058, 0.1192957833, 0.0508196466, -0.0088097928, -0.0289926771, -0.1110829934, -0.2872489691, -0.05463...
https://github.com/huggingface/datasets/issues/1942
[experiment] missing default_experiment-1-0.arrow
> The fact a datasets.Metric object cannot be used as a simple compute function in a multi-process environment is, in my opinion, a bug in datasets Yes totally, this use case is supposed to be supported by `datasets`. And in this case there shouldn't be any collision between the metrics. I'm looking into it :) My g...
the original report was pretty bad and incomplete - my apologies! Please see the complete version here: https://github.com/huggingface/datasets/issues/1942#issuecomment-786336481 ------------ As mentioned here https://github.com/huggingface/datasets/issues/1939 metrics don't get cached, looking at my local `~/...
85
[experiment] missing default_experiment-1-0.arrow the original report was pretty bad and incomplete - my apologies! Please see the complete version here: https://github.com/huggingface/datasets/issues/1942#issuecomment-786336481 ------------ As mentioned here https://github.com/huggingface/datasets/issues/19...
[ -0.1335230172, 0.037119586, 0.0925974548, 0.2165249735, 0.1155614406, 0.0259056352, 0.2892279923, 0.2465748787, 0.2430669516, 0.1485800147, 0.0223329086, 0.1771803796, -0.2399052382, 0.1134240329, -0.0412851013, -0.0090812147, -0.032152351, -0.0648061261, -0.278406769, -0.05329...