title (string, lengths 5–164) | labels (list) | bodyText (string, lengths 0–46.7k) |
|---|---|---|
TPUs: crash using torch-xla nightly | [
"bug",
"help wanted",
"won't fix",
"accelerator: tpu"
] | 🐛 Bug
If I try to use torch-xla nightly with PyTorch Lightning, I see this crash:
(torch-xla-nightly) zcain@zcain-pl-verify:~/pytorch-lightning/pl_examples/domain_templates$ python computer_vision_fine_tuning.py
Traceback (most recent call last):
File "computer_vision_fine_tuning.py", line 55, in <module>
impor... |
Fix docs typo in starter files | [
"docs"
] | 📚 Documentation
For typos and doc fixes, please go ahead and:
Create an issue.
Fix the typo.
Submit a PR.
Thanks! |
Fix typo in starter files | [
"docs"
] | 📚 Documentation
For typos and doc fixes, please go ahead and:
Create an issue.
Fix the typo.
Submit a PR.
Thanks! |
on_*_batch_transfer hooks should include a dataloader_index parameter | [
"feature",
"help wanted"
] | 🚀 Feature
See title
Motivation
Users might want to apply different logic depending on which dataloader produced the batch, as different dataloaders might contain different batch structures
Pitch
def on_*_batch_transfer(batch, dataloader_idx)
Additional context
docs: https://pytorch-lightning.readthedocs.io/en/l... |
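The pitch above can be sketched in plain Python. The hook name, the dispatch logic, and the batch shapes are illustrative assumptions, not the actual Lightning API:

```python
# Sketch of the proposed hook signature: the batch-transfer hook receives the
# index of the dataloader that produced the batch, so users can branch on it.
# Names and batch structures are illustrative, not the real PyTorch Lightning API.

def on_before_batch_transfer(batch, dataloader_idx):
    # Hypothetical example: dataloader 0 yields dicts, dataloader 1 yields tuples.
    if dataloader_idx == 0:
        batch["pixels"] = [p / 255.0 for p in batch["pixels"]]  # normalize images
    else:
        features, label = batch
        batch = ([f * 2.0 for f in features], label)  # some other preprocessing
    return batch

print(on_before_batch_transfer({"pixels": [255, 0]}, dataloader_idx=0))
print(on_before_batch_transfer(([1.0, 2.0], 3), dataloader_idx=1))
```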
Allow arbitrary val check intervals when using max_steps | [
"feature",
"help wanted"
] | 🚀 Feature
Currently, when using the max_epochs setting, we can set val_check_interval to any arbitrary value. We cannot do this if max_steps is set: val_check_interval must then be less than one epoch.
Motivation
Constraining this value to be determined as a number of steps and also be less than an epoch makes it difficult to config... |
Does auto_find_lr support ddp mode? | [
"bug",
"help wanted"
] | 🐛 Bug
Please reproduce using the BoringModel
To Reproduce
Use following BoringModel and post here
Expected behavior
Environment
Note: Bugs with code are solved faster ! Colab Notebook should be made public !
IDE: Please, use our python bug_report_model.py template.
Colab Notebook: Please copy and paste the out... |
Mixed precision not working with v 1.2 | [
"bug",
"help wanted",
"priority: 0"
] | 🐛 Bug
After updating to 1.2 from 1.1.1, automatic mixed precision stopped working. Everything is float32 and I get CUDA OOM errors that I shouldn't (with float16 tensors). It worked fine on 1.1.1.
Here's my Trainer args (maybe there's a conflicting combo of args or something):
Trainer(logger=logger,
callbacks=[c... |
Encapsulate logic in DistributedType | [
"help wanted",
"good first issue",
"refactor"
] | 🚀 Feature
Any logic to compare different DistributedTypes should be encapsulated by the enum itself.
Motivation
#5743 (comment)
#5743 (comment)
#5970 (comment)
Additional context
pytorch-lightning/pytorch_lightning/utilities/enums.py
Lines 51 to 69
in
0b27147
... |
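The encapsulation idea above can be sketched with the stdlib `enum` module. The member names mirror Lightning's `DistributedType`; the helper methods are the proposed addition, and their exact names are assumptions:

```python
from enum import Enum

# Sketch of encapsulating comparison logic inside the enum itself, so call
# sites write `distributed_type.is_ddp()` instead of comparing against several
# members by hand.

class DistributedType(Enum):
    DP = "dp"
    DDP = "ddp"
    DDP2 = "ddp2"
    DDP_SPAWN = "ddp_spawn"
    HOROVOD = "horovod"

    @staticmethod
    def interactive_compatible_types():
        # Types that can run inside an interactive environment (illustrative).
        return [DistributedType.DP, DistributedType.DDP_SPAWN]

    def is_interactive_compatible(self):
        return self in DistributedType.interactive_compatible_types()

    def is_ddp(self):
        # One place to answer "is this some flavor of DDP?"
        return self in (DistributedType.DDP, DistributedType.DDP_SPAWN, DistributedType.DDP2)

print(DistributedType.DDP_SPAWN.is_ddp())  # True
print(DistributedType.DP.is_ddp())         # False
```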
Parameterized freeze/unfreeze functions for finetuning | [
"feature",
"help wanted",
"won't fix"
] | 🚀 Feature
Add an optional parameter to the freeze and unfreeze methods to define which layers should be frozen/unfrozen.
Motivation
Reduce the amount of code to freeze/unfreeze a number of layers.
Pitch
When finetuning networks we often want to freeze/unfreeze a certain number of layers for training. While it's nice... |
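A minimal sketch of the parameterized freeze, where `n` selects how many trailing layers stay trainable. The tiny `Layer`/`Param` classes below stand in for torch.nn modules so the idea is runnable without PyTorch; only the slicing logic matters:

```python
# Hypothetical parameterized freeze: freeze everything except the last N layers.

class Param:
    def __init__(self):
        self.requires_grad = True

class Layer:
    def __init__(self):
        self.params = [Param(), Param()]

def freeze(model_layers, except_last_n=0):
    """Freeze all layers except the last `except_last_n` of them."""
    cutoff = len(model_layers) - except_last_n
    for i, layer in enumerate(model_layers):
        for p in layer.params:
            p.requires_grad = i >= cutoff  # only trailing layers stay trainable

layers = [Layer() for _ in range(4)]
freeze(layers, except_last_n=1)
print([layer.params[0].requires_grad for layer in layers])  # [False, False, False, True]
```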
DDP does not work well with `torch.no_grad()` in 1.2 | [
"bug",
"help wanted",
"distributed",
"priority: 1"
] | 🐛 Bug
My code that uses torch.no_grad stopped working after updating to 1.2 (I'm doing knowledge distillation and using it to protect the teacher model computation). Now it throws
RuntimeError: Expected to have finished reduction in the prior iteration before starting a new one
To Reproduce
Here is my modification of... |
API consistency: "val" vs "validation" | [
"feature",
"good first issue",
"discussion",
"design",
"priority: 1"
] | Motivation
As a new user learning PyTorch Lightning, I was surprised at the naming inconsistency between the validation_step() and val_dataloader() hooks. Searching through the codebase, it seems that most methods or getters/setters favor "validation"...
validation_step_end
validation_step
validation_epoch_end
on_vali... |
apex amp not working in 1.2.0 | [
"bug",
"help wanted"
] | 🐛 Bug
Got RuntimeError: Invoked 'with amp.scale_loss, but internal Amp state has not been initialized. model, optimizer = amp.initialize(model, optimizer, opt_level=...) must be called before with amp.scale_loss.`
When training with apex amp
No problem in 1.1.8.
Also got amp.load_state_dict when resuming from checkpo... |
on_{validation,test}_epoch_end functions should have an outputs parameter | [
"duplicate",
"feature",
"help wanted",
"design"
] | 🚀 Feature
pytorch-lightning/pytorch_lightning/core/hooks.py
Line 255
in
3b0e4e0
def on_validation_epoch_end(self) -> None:
pytorch-lightning/pytorch_lightning/core/hooks.py
... |
pl+wandb: Hanging during "cleaning up ddp environment" when using DDPSpawnPlugin + WandbLogger | [
"bug",
"help wanted",
"won't fix",
"distributed",
"logger",
"priority: 1"
] | 🐛 Bug
When using an accelerator that basically uses a "spawn" start method for multiprocessing (rather than the Linux default "fork"), any program that actually spawns a new worker (num_processes>1) seems to hang upon cleanup.
Concretely, I've only seen this when:
Accelerator is either ddp_cpu or ddp_spawn; AND
WandbLogg... |
ModelCheckpoint is not saving top k models | [
"help wanted",
"question",
"docs"
] | 🐛 Bug
ModelCheckpoint is not correctly monitoring metric values.
To Reproduce
https://colab.research.google.com/drive/1onBmED7dngP_VwFxcFBMsnQi82KbizSk?usp=sharing
Expected behavior
ModelCheckpoint should save top k models based on x metric, but it currently displays Epoch XXX, step XXX: x was not in top 2 for every e... |
DDPCPU not working with DDPPlugin(find_unused_parameters=True) in Ver 1.2.0 | [
"bug",
"help wanted",
"priority: 0",
"distributed"
] | 🐛 Bug
When using accelerator="ddp_cpu" together with plugins=[DDPPlugin(find_unused_parameters=True)] to create a trainer, the program tries to re-run itself (and recreate the trainer) and finally fails at checking GPU devices.
Please reproduce using the BoringModel
trainer = Trainer... |
@auto_move_data unexpectedly uses transfer_batch_to_device from DataModule | [
"bug",
"help wanted",
"docs",
"discussion",
"design"
] | 🐛 Bug
I have a LightningDataModule that produces a custom batch object, and I have implemented transfer_batch_to_device to move this data to the GPU. This works.
I have a separate infer method (on my LightningModule) which is invoked with a Tensor, and I wanted to use @auto_move_data to move this Tensor to
the GPU. Ho... |
PyTorchProfiler crashes when emit_nvtx=True | [
"bug",
"help wanted",
"priority: 0",
"callback"
] | 🐛 Bug
When training with PyTorchProfiler(emit_nvtx=True), the training stops with the following error :
AttributeError: 'emit_nvtx' object has no attribute 'function_events'
Please reproduce using the BoringModel
-> https://colab.research.google.com/drive/1cqMxMgDVgltluaYZAkn9d2srSDmOu-1p?usp=sharing
To Reproduce
Use ... |
Load models give different results from original | [
"question"
] | ❓ Questions and Help
What is your question?
What is the right way to retrieve a trained model and use it after loading?
class NER_Model(pl.LightningModule):
def __init__(self, hyperparams, model_parameters, dataset_infos, extra_infos):
super(NER_Model, self).__init__()
# ---------- hyperparams
... |
Inference AUROC on valid set using ModelCheckpoint saved ckpt does not equal valid AUROC from training | [
"bug",
"help wanted",
"checkpointing"
] | 🐛 Bug
During training, I have a ModelCheckpoint callback that saves the top 5 models based on valid_AUC computed at the end of validation phase using multiclass_auroc() function. The callback saves the .ckpt file as epoch=X_valid_AUC=0.XXXX.ckpt. When I load the ckpt and run trainer.test() on the same validation set u... |
Prevent deprecated documentation from showing up in search engine top results | [
"good first issue",
"docs",
"let's do it!"
] | 📚 Documentation
Currently when we search keywords like "pytorch lightning trainer" we get results that point to very very very outdated docs!
It should instead point to the latest stable documentation pages.
Investigate these options here:
https://docs.readthedocs.io/en/stable/faq.html#how-can-i-avoid-search-results-... |
Slurm GPUs Not Properly Detected | [
"bug",
"help wanted"
] | 🐛 Bug
I'm using slurm and the GPUs don't seem to be detected properly, the doc at https://pytorch-lightning.readthedocs.io/en/stable/clouds/slurm.html implies it's as simple as passing the number of GPUs and nodes to the trainer with a suitable sbatch script. But when I do this for a single node, 2 GPUs, I get the fol... |
logging error with 1.2.0 version | [
"help wanted",
"docs",
"working as intended",
"logging"
] | 🐛 Bug
If I use a logger named "lightning", the logger prints each message twice.
This happens with pytorch-lightning 1.2.0 and doesn't happen with 1.1.8.
This is reproduced easily like below.
Error Example
Type "help", "copyright", "credits" or "license" for more information.
>>> import pytorch_lightning as pl
>>> imp... |
Pickle error and OOM when upgrading to 1.2.0 | [
"question",
"won't fix"
] | When upgrading from 1.1.6 to 1.2.0, I noticed 2 changes
Significant increase of GPU memory
Pickle error in my module class (the metric_fn object is certainly not picklable, but the same code worked fine in 1.1.6)
Do you have an idea what changes in 1.2.0 may have caused these issues?
Any suggestions for the memory problem?
My pseu... |
val_check_interval equivalent for training loss logging | [
"won't fix"
] | I very much like the feature of val_check_interval. I use it for logging the validation loss when the epochs are too large. However, I would like to log the training loss at the same steps as well so that for every validation loss entry in my logs, I also have a corresponding training loss entry.
Is there an easy way t... |
T5ForConditionalGeneration to Trainer.fit() | [
"bug",
"help wanted",
"priority: 0"
] | Model I am using (MT5ForConditionalGeneration ('google/mt5-base')):
The problem arises when passing
pl.LightningDataModule with T5ForConditionalGeneration to Trainer.fit()
/usr/local/lib/python3.7/dist-packages/pytorch_lightning/trainer/connectors/model_connector.py in copy_trainer_model_properties(self, model)
Attribu... |
Add verbose option to prog_bar to print summary of every epoch | [
"feature",
"help wanted",
"good first issue",
"won't fix"
] | Similar to ModelCheckpoint(verbose=true), we can add verbose_progress_bar trainer flag, to print the logs to the screen after every epoch |
Latest Lightning does not support multiple callbacks that stop | [
"bug",
"help wanted",
"priority: 0"
] | 🐛 Bug
In the latest version of lightning, you do not seem to be able to have multiple callbacks which can stop.
Please reproduce using the BoringModel
If you have multiple callbacks which can do early stopping, only the last one can be active.
Create a callback with early stopping, MyStoppingCallback(). Add it, the... |
Pure PyTorch vs Lightning is faster with CPU small toy example | [
"won't fix",
"docs",
"working as intended"
] | 🐛 Bug
Recently I found that Lightning runs much slower than simple PyTorch code.
Code using Lightning:
import os
import math
import torch
from torch import nn
from torch.nn import functional as F
from torch.utils.data import DataLoader, random_split
from torchvision.datasets import MNIST
from torchvision import trans... |
validation_epoch_end does not contain all `validation_step` outputs when using DDP | [
"help wanted",
"question",
"won't fix",
"distributed",
"priority: 1"
] | 🐛 Bug
When using DDP, validation_epoch_end does not receive all of the outputs from validation_step. For instance, running the following script:
import pytorch_lightning as pl
import torch
from torch import nn
class Module(pl.LightningModule):
def __init__(self):
super().__init__()
self.linear = ... |
MLFlow Logger Makes a New Run When Resuming from hpc Checkpoint | [
"bug",
"help wanted",
"checkpointing",
"environment: slurm",
"priority: 2",
"logger: mlflow"
] | 🐛 Bug
Currently the MLFlowLogger creates a new run when resuming from an hpc checkpoint, e.g., after preemption by slurm and requeuing. Runs are an MLFlow concept that groups things in their UI, so when resuming after requeue, it should really be reusing the run ID. I think this can be patched into the hpc checkpoint ... |
Training stuck running on the SLURM cluster with multiple gpus per node | [
"bug",
"won't fix",
"waiting on author",
"distributed",
"environment: slurm"
] | 🐛 Bug
I try to train a model across multiple nodes on a slurm cluster, where each node has two gpus. Therefore, I use the following flags in the trainer:
trainer = pl.Trainer(
gpus=2, num_nodes=2,
accelerator='ddp',
max_epochs=2
)
and submit the job with sbatch run_training.sh . However, I end u... |
incorrect usage of detach/cpu/to | [
"bug",
"help wanted"
] | 🐛 Bug
Incorrect use of detach() and cpu() during fixing #4592.
Please reproduce using the BoringModel
You cannot really.
To Reproduce
Use following BoringModel and post here
The fix for #4592 has good intentions but an obvious bug slipped through.
It is easy to understand but hard to test it, so let's rely on common ... |
cli: Confused on (str, int, List[int]) variants for argparse for --gpus flag? | [
"bug",
"help wanted",
"question",
"docs",
"priority: 1"
] | 🐛 Bug
A colleague (@siyuanfeng-tri) and I sometimes get confused on how the --gpus flag is to be interpreted by argparse. I see the following docs:
https://pytorch-lightning.readthedocs.io/en/1.2.1/advanced/multi_gpu.html#select-gpu-devices
But we're sometimes confused about when argparse interpretation will either as... |
TPU: Crashes using trainer.test() | [
"bug",
"help wanted",
"priority: 0",
"accelerator: tpu"
] | 🐛 Bug
trainer.test() does not work with TPUs.
There are a few different ways we've seen it crash.
1. Looks like a call to barrier() coming from __test_using_best_weights
RuntimeError Traceback (most recent call last)
<ipython-input-17-587e2a9e3858> in <module>
----> 1 trainer.test(datamodu... |
Improve verbosity and progress bar display for early stopping | [
"feature",
"help wanted",
"won't fix",
"priority: 1"
] | 🚀 Feature
When using EarlyStopping, it would be great if the progress bar added two values, like "espatience" (the number of epochs of patience left before it might stop early) and "estarget" (the objective, including min_delta, that must be achieved to avoid early stopping).
Motivation
EarlyStopping verbos... |
Improve documentation for EarlyStopping patience parameter | [
"docs"
] | 📚 Documentation
EarlyStopping API docs currently reads:
patience (int) β number of validation epochs with no improvement after which training will be stopped. Default: 3.
However, this is quite confusing because 'validation epochs' is not really a term used much in the documentation, and this leads the user to believ... |
Calling the callback on_before_accelerator_backend_setup gives an error. | [
"bug",
"help wanted"
] | I am trying to use this callback with the following code.
from pytorch_lightning.callbacks import Callback
class InitCallback(Callback):
def on_before_accelerator_backend_setup(trainer, pl_module):
print(trainer.model)
print(trainer.global_rank)
exit()
It gives the following error:
Type... |
Error in PL 1.2 when loading models that calls save_hyperparameters and is trained using PL <1.2 | [
"bug",
"help wanted",
"priority: 0",
"checkpointing"
] | 🐛 Bug
After updating to PL 1.2 LightningModule.load_from_checkpoint(checkpoint), using a checkpoint from a model trained using PL 1.1.6, fails with the following AttributeError:
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/Users/larshbj/Library/Caches/pypoetry/virtualenvs/vake-TBRj... |
all_gather for TPU doesn't support backward gradients. | [
"feature",
"help wanted",
"good first issue",
"won't fix",
"accelerator: tpu"
] | Currently, we rely on AllGatherGrad to compute gather for GPUs.
TODO:
- [ ] Extend this class to support TPU
- [ ] Add tests |
Introduce a SecurePlugin | [
"feature",
"help wanted",
"won't fix"
] | 🚀 Feature
Motivation
Pitch
PR #6212 will add check if PySyft is present, which we would like to avoid.
The idea is to introduce a secure plugin, which will handle those checks instead.
Alternatives
Additional context |
Efficientnet_pytorch breaks on 'ddp' | [
"bug",
"help wanted",
"won't fix",
"distributed"
] | 🐛 Bug
I'm trying to train an Efficientnet-B0 model using the implementation from https://github.com/lukemelas/EfficientNet-PyTorch repository. Now even though it works fine in 'dp' mode it breaks on 'ddp' or 'ddp2' and i keep getting the following error :
RuntimeError: Expected to have finished reduction in the pr... |
Default process group is not initialized in setup() function | [
"bug",
"help wanted",
"priority: 0",
"distributed"
] | 🐛 Bug
The default process group is not initialized in the DataModule setup() function.
This is BC-breaking with PL >= 1.2.0.
With PL == 1.1.8 this code works.
Reproduce notebook: https://colab.research.google.com/drive/1AHadRi0Bly9OnzrJFv8XmS2T9Y5zklvg?usp=sharing
Expected behavior
fit() should work.
Environment
* CUDA:... |
DDP + mixed precision + sharded not working on PL 1.2.1 | [
"bug",
"help wanted",
"priority: 0"
] | 🐛 Bug
After upgrading to pytorch-lightning 1.2.1, training with DDP + 16 bit precision + sharded is broken, as the training loss doesn't go down (stays around 2.31). Without the sharded option it seems to work.
To Reproduce
from argparse import ArgumentParser
import torch
from torch.nn import functional as F
import p... |
auto_scale_batch_size fails with datamodule in pl==1.2.*, succeeds in pl==1.1.8 | [
"bug",
"help wanted",
"trainer: tune",
"priority: 1"
] | 🐛 Bug
Running trainer = pl.Trainer(auto_scale_batch_size=True); trainer.tune(model, datamodule=dm) succeeds in pl==1.1.8, but fails in pl==1.2.* (tested both 1.2.0 and 1.2.1) with error:
Traceback (most recent call last):
File "train.py", line 42, in <module>
trainer.tune(model, datamodule=dm)
File "/home/ubun... |
Support save_hyperparameters() to checkpoints without sending to logger | [
"feature",
"help wanted"
] | 🚀 Feature
self.save_hyperparameters() is an awesome simple way to save input arguments with checkpoints such that when loading a checkpoint the module will be constructed with the same parameters. The method also seems to log the hyperparameters to the experiment logger, which is not always desired. It would be great ... |
trainer.training_type_plugin.broadcast doesn't seem to work properly | [
"bug",
"help wanted",
"priority: 1"
] | 🐛 Bug
Please reproduce using the BoringModel
To Reproduce
Use following BoringModel and post here
Expected behavior
Environment
Note: Bugs with code are solved faster ! Colab Notebook should be made public !
IDE: Please, use our python bug_report_model.py template.
Colab Notebook: Please copy and paste the out... |
[RFC] Gradient clipping hooks in the LightningModule | [
"feature",
"help wanted",
"refactor",
"design"
] | 🚀 Feature
Add clipping hooks to the LightningModule
Motivation
It's currently very difficult to change the clipping logic
Pitch
class LightningModule:
def clip_gradients(self, optimizer, optimizer_idx):
...
The default implementation would be the same as we currently provide, where the trainer's clipping f... |
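As a rough illustration of what a default implementation of such a hook typically does, here is clipping by global norm on plain floats (the function name and the float-based gradients are illustrative stand-ins, not Lightning's internals):

```python
import math

# Sketch of clip-by-global-norm, the usual default clipping strategy:
# if the combined L2 norm of all gradients exceeds max_norm, scale every
# gradient down uniformly so the combined norm equals max_norm.

def clip_by_global_norm(grads, max_norm):
    total_norm = math.sqrt(sum(g * g for g in grads))
    if total_norm > max_norm:
        scale = max_norm / total_norm
        grads = [g * scale for g in grads]
    return grads

clipped = clip_by_global_norm([3.0, 4.0], max_norm=1.0)  # norm 5.0 scaled to 1.0
print(clipped)  # approximately [0.6, 0.8]
```

A user overriding the proposed `clip_gradients` hook could replace this logic with, e.g., per-parameter value clipping.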
dp + manual_optimization is not working on PL 1.2.1 | [
"bug",
"help wanted",
"strategy: dp",
"priority: 2"
] | 🐛 Bug
dp + manual optimization is not working on PL 1.2.1
I am setting automatic optimization = False in my model and giving Trainer() accelerator='dp'
Expected behavior
Traceback (most recent call last):
File "/usr/local/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py", line 644, in run_train
s... |
trainer.global_step prints repetitive steps when there is more than one gradient accumulation step | [
"bug",
"help wanted"
] | I am using the latest pytorch_lightning version from conda. It seems that in this version, trainer.global_step changes with gradient accumulation steps, as does the total number of training steps. In the previous version, this wasn't the case.
What is the correct approach if I want to do some task in every 5000 training steps? ... |
DDP: Multiple processes try to create the logger directory tree | [
"bug",
"help wanted",
"distributed",
"priority: 1"
] | 🐛 Bug
A user from our supercomputing center ran into an issue which I think turned out to be a bug in PyTorch-Lightning.
When using the DDP accelerator together with a logger, multiple processes will try creating the logger directory tree, causing some errors about already existing directories or files.
Troubleshooti... |
trainer.fit must be called before trainer.predict else predict fails with Misconfiguration Exception | [
"bug",
"help wanted"
] | 🐛 Bug
I am trying to use the new predict API of the trainer by loading a checkpoint. But it seems that trainer.fit must be called before trainer.predict else the config validator fails:
GPU available: False, used: False
TPU available: None, using: 0 TPU cores
Traceback (most recent call last):
File "pytorch-lightnin... |
Don't `lr_scheduler.step()` in manual optimization | [
"feature",
"help wanted"
] | 🚀 Feature
Currently, lr_scheduler.step() is called in both manual and automatic optimization. We should let users call lr_scheduler.step() manually in manual optimization for ultimate flexibility. Requested by @carmocca. |
ssim functional metric not working with 16 bit precision | [
"bug",
"duplicate",
"help wanted"
] | 🐛 Bug
ssim functional metric doesn't work with native 16 bit precision
Please reproduce using the BoringModel
https://colab.research.google.com/drive/1Wzxjq-oQavT-kg9go9Ti-AcCEhuir7H9?usp=sharing
if you remove 16 bit precision from trainer it works but with 16 bit precision it gives the following error:
TypeError: Exp... |
Fitting hangs at "cleaning up ddp environment..." when tpu_cores=8 | [
"bug",
"help wanted",
"priority: 0",
"accelerator: tpu"
] | 🐛 Bug
When setting tpu_cores of Trainer to 8, fitting hangs at "cleaning up ddp environment...".
Please reproduce using the BoringModel
https://colab.research.google.com/drive/1tJswNaT0I-GrGsi6ngwwRDUmeFY1pFr3?usp=sharing
To Reproduce
Run above URL notebook.
Expected behavior
Trainer.fit ends normally.
Environment
P... |
fast_dev_run fail on log_hyperparams | [
"bug",
"help wanted",
"logger"
] | 🐛 Bug
Issue when running: fast_dev_run=True
"TypeError: log_hyperparams() takes 2 positional arguments but 3 were given"
To Reproduce
When using the following, where self.hp_metrics is a list of strings, each an available metric being logged, for example "accuracy/val".
def on_train_start(self):
... |
Make verbose=False prevent showing "Saving latest checkpoint..." | [
"feature",
"help wanted",
"good first issue",
"let's do it!"
] | 🚀 Feature
Currently Lightning prints:
[lightning][INFO] - Saving latest checkpoint...
when running training with ModelCheckpoint(verbose=False).
Not sure if it's a bug or intended...
Can we make it not log that when passing verbose=False? |
find_unused_parameters=False is causing multi-GPU training to hang | [
"bug",
"help wanted",
"priority: 0",
"distributed",
"design"
] | We need to decide if we should:
a. set the default to True
b. add more clarification in the docs/warning
should it be a property of the LightningModule, not the trainer?
see more context here:
#5604 |
self.device does not return the correct device in DataParallel | [
"bug",
"help wanted",
"strategy: dp"
] | 🐛 Bug
The self.device property does not get updated in the replicas of DataParallel.
Please reproduce using the BoringModel
### To Reproduce
import os
import torch
from torch.utils.data import Dataset
from pytorch_lightning import LightningModule, Trainer
class RandomDataset(Dataset):
def __init__(self, size... |
Cannot Import LearningRateLogger | [
"question"
] | 🐛 Bug
I have been working with the same code in Colab for some time with no issues. Since today, PL could not be imported (#6415). As per linked thread, this is resolved with installing from master, however the following problem still persists:
import pytorch_lightning as pl
from pytorch_lightning.callbacks import Lea... |
[RFC] Create explicit setup and teardown hooks for each stage on the Lightning and DataModules | [
"feature",
"won't fix",
"design"
] | 🚀 Feature
LightningModules and DataModules currently support a setup API which takes an optional stage argument.
#6386 addresses some issues in the setup/teardown lifecycle, so I was wondering if we should take this further (#6401)
Motivation
Pros of making the separate hooks for each stage:
Clarity in the API that h... |
trainer.test is breaking when a model is not passed | [
"bug",
"help wanted",
"priority: 0"
] | From the docs:
# (1) load the best checkpoint automatically (lightning tracks this for you)
trainer.test()
Trainer.test should use the best checkpoint when a model isn't provided, and currently, that doesn't work. |
ImportError: cannot import name 'Batch' from 'torchtext.data' (/usr/local/lib/python3.7/dist-packages/torchtext/data/__init__.py) | [
"bug",
"help wanted"
] | Not able to import PyTorch lightning to google colab |
Formalize progress tracking inside of the trainer internals | [
"feature",
"help wanted",
"discussion",
"refactor",
"design"
] | 🚀 Feature
We should better enforce progress tracking across these dimensions:
Stage: training, evaluation (validation and test), and prediction loops
Granularity: batches, steps, epochs
batches vs steps: steps = optimizer steps (parameter updates) and applies to training loop only. this will differ from batches when ... |
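The batches-vs-steps distinction described above can be sketched with a small dataclass per loop (the class and field names are illustrative, not the tracker Lightning ended up with):

```python
from dataclasses import dataclass, field

# Sketch of explicit progress tracking along the dimensions described above.
# In the training loop, a step (optimizer update) advances once per
# `accumulate_grad_batches` batches, so the two counters can diverge.

@dataclass
class LoopProgress:
    batches: int = 0
    steps: int = 0    # optimizer steps; meaningful for the training loop only
    epochs: int = 0

@dataclass
class TrainerProgress:
    train: LoopProgress = field(default_factory=LoopProgress)
    val: LoopProgress = field(default_factory=LoopProgress)
    test: LoopProgress = field(default_factory=LoopProgress)

progress = TrainerProgress()
accumulate_grad_batches = 2
for batch_idx in range(4):
    progress.train.batches += 1
    if progress.train.batches % accumulate_grad_batches == 0:
        progress.train.steps += 1  # one parameter update per 2 batches
progress.train.epochs += 1
print(progress.train)  # batches=4, steps=2, epochs=1
```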
Continuing training resets logger step | [
"bug",
"help wanted",
"won't fix",
"priority: 1"
] | 🐛 Bug
I am running Pytorch Lightning in a federated learning setting. Therefore I have several models and I need to instantiate a Trainer object for one model multiple times. Every time I do that the associated logger resets the epoch and logs the metrics on top of each other in the plots. Since instantiating a new Tr... |
Early Stopping Min Epochs | [
"feature",
"help wanted",
"won't fix",
"design",
"callback"
] | 🚀 Feature
The EarlyStopping callback should allow the user to specify a minimum number of epochs to run before early stopping is triggered
Motivation
In many modern training loops, the learning rate is varied in some kind of cycle. For example, in the Transformer paper, they warm up the learning rate by increasing it ... |
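A minimal sketch of the requested behavior, assuming `min_epochs` becomes a callback argument (the class and parameter names are hypothetical): no stopping decision is made before `min_epochs`, regardless of patience, so a warm-up cycle cannot trigger early stopping.

```python
# Sketch of an early-stopping check that respects a minimum number of epochs.

class EarlyStopper:
    def __init__(self, patience=3, min_epochs=0):
        self.patience = patience
        self.min_epochs = min_epochs
        self.best = float("inf")
        self.wait = 0

    def should_stop(self, epoch, monitored_loss):
        if monitored_loss < self.best:
            self.best = monitored_loss
            self.wait = 0
        else:
            self.wait += 1
        # Never stop while still warming up (e.g. during an LR warm-up cycle).
        return epoch >= self.min_epochs and self.wait >= self.patience

stopper = EarlyStopper(patience=2, min_epochs=5)
losses = [1.0, 1.1, 1.2, 1.3, 1.4, 1.5, 1.6]  # loss keeps getting worse
stops = [stopper.should_stop(epoch, loss) for epoch, loss in enumerate(losses)]
print(stops)  # stopping is deferred until epoch 5 despite patience running out at epoch 2
```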
[DeepSpeed] `MPI world size 1 does not match torch world size 2` when launched with >1 GPU. | [
"bug",
"priority: 1"
] | 🐛 Bug
When trying to train a model with PyTorch Lightning and Deepspeed, the process fails with an assertion error:
AssertionError: MPI world size 1 does not match torch world size 2.
The number 2 in this case is replaced with the number of GPUs specified by the PyTorch Lightning Trainer.
Please reproduce using the Bo... |
[RFC] Support checkpointing multiple callbacks of the same type | [
"feature",
"help wanted",
"design",
"callback"
] | 🚀 Feature
Currently when dumping the checkpoint dict, we overwrite callback states if there are multiple callbacks of the same type:
pytorch-lightning/pytorch_lightning/trainer/callback_hook.py
Lines 209 to 224
in
1c013b4
def ... |
Global step always zero after loading checkpoint | [
"bug",
"help wanted",
"priority: 0",
"checkpointing"
] | I am saving checkpoints inside my module using self.trainer.save_checkpoint(path). I am able to load these checkpoints into the model using MyModel.load_from_checkpoint(path) and trainer using Trainer(resume_from_checkpoint=path). However, both the resulting model and trainer have global_step=0 regardless of the step w... |
LayerSummary does not work with ScriptModules | [
"bug",
"help wanted"
] | 🐛 Bug
I am trying to do finetuning on a pre-trained model which is saved as TorchScript. Unfortunately, it looks like Lightning's LayerSummary does not support scripted modules:
To Reproduce
Run
import torch
from torch import nn
import pytorch_lightning as pl
class Module(pl.LightningModule):
def __init__(self)... |
Multiple Trainloaders in a Sequential Mode. | [
"feature",
"help wanted",
"won't fix"
] | 🚀 Feature
Add new mode to multiple_trainloader_mode of the Trainer in order to support training on multiple dataloaders in a sequential way.
Motivation
Say we want to train on two dataloaders A, and B.
Currently, the modes support max/min size of all trainloaders and provide a list/dict containing batches from loade... |
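The proposed sequential mode can be contrasted with the existing zip-style behavior using plain lists as stand-ins for DataLoaders (the mode itself is the proposal, not current Trainer behavior):

```python
from itertools import chain

# Two toy "dataloaders": in sequential mode we exhaust loader A fully,
# then loader B, instead of taking one batch from each per step.

loader_a = ["a1", "a2", "a3"]
loader_b = ["b1", "b2"]

# Existing min_size-style behavior: one batch from each loader per step.
combined = list(zip(loader_a, loader_b))
print(combined)  # [('a1', 'b1'), ('a2', 'b2')]

# Proposed sequential mode: all of A, then all of B.
sequential = list(chain(loader_a, loader_b))
print(sequential)  # ['a1', 'a2', 'a3', 'b1', 'b2']
```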
AttributeError: 'BoringModel' object has no attribute 'require_backward_grad_sync' when using manual optimization with TPU | [
"bug",
"help wanted",
"priority: 0",
"accelerator: tpu"
] | 🐛 Bug
Hello!
When using manual optimization with TPU, I am getting an AttributeError: 'BoringModel' object has no attribute 'require_backward_grad_sync'. When I replace self.manual_backward(loss) with loss.backward() things seem to work, but I am not sure if this is a safe or sustainable workaround. It seems the error... |
Training stuck at 0% at first epoch | [
"bug",
"help wanted",
"priority: 1"
] | Hello,
Training gets stuck at 0% at the very first epoch, whether using fast_dev_run or not. No error is reported.
How can I debug this?
Specs:
cudatoolkit 10.2.89 hfd86e86_1
python 3.8.8 hdb3f193_4
pytorch 1.6.0 py3.8_cuda10.2.89_c... |
relative refresh rate in progress bar | [
"feature",
"help wanted",
"won't fix"
] | 🚀 Feature
Dynamic refresh rate: allow a float, interpreted as a percentage of progress, defaulting to 0.01.
Motivation
As a user, a refresh every percent of progress is usually enough.
Pitch
Validation in the notebook never updates the progress bar, as the default refresh rate is 20 and this process has only 2 steps.
Alternatives
Additional context |
"Advanced" profiler not working | [
"bug",
"help wanted",
"priority: 0"
] | 🐛 Bug
Hello Guys,
I am having issues with the advanced profiling option in Lightning. Here is the colab file documenting the issue on a simple model.
The issue happens whenever I set stochastic_weight_avg=True.
Also, I have my own more complicated setup where I faced an issue regarding the Advanced profiler, but it was a different on...
Unable to use any scheduler ('scheduler' object has no attribute 'param_groups') [BUG] | [
"bug",
"help wanted",
"priority: 0"
] | I am trying to convert a number of PyTorch repos into Lightning, but I am unable to use any scheduler. I have tried both the original custom-made schedulers and official PyTorch schedulers, and always get the same 'scheduler' object has no attribute 'param_groups' error.
Traceback (most recent call last):
File "tools/train_ne... |
Option of doing optimizer.zero_grad(set_to_none=True). | [
"feature",
"help wanted"
] | 🚀 Feature
An option to set gradients to None instead of zero. This was introduced in PyTorch 1.7. Documentation.
Motivation
Training Speed improvements, as discussed by PyTorch here.
Pitch
I changed Line 1400 of https://github.com/PyTorchLightning/pytorch-lightning/blob/master/pytorch_lightning/core/lightning.py.
Fro... |
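The semantic difference behind `set_to_none=True` can be sketched without torch; the tiny `Param` class below stands in for a tensor parameter, and the in-place zero is simulated with a float:

```python
# Sketch contrasting zeroing gradients with setting them to None, mirroring
# the behavior of optimizer.zero_grad(set_to_none=True) in PyTorch >= 1.7.

class Param:
    def __init__(self, grad):
        self.grad = grad

def zero_grad(params, set_to_none=False):
    for p in params:
        if set_to_none:
            p.grad = None    # frees the grad buffer; next backward allocates fresh
        elif p.grad is not None:
            p.grad = 0.0     # stand-in for zeroing the tensor in place

params = [Param(grad=0.5), Param(grad=-1.2)]
zero_grad(params, set_to_none=True)
print([p.grad for p in params])  # [None, None]
```

Skipping the in-place zero (and the subsequent read of a zero-filled buffer) is where the speed-up comes from.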
Should truncated_bptt_steps take effect during validation phase? | [
"bug",
"help wanted",
"priority: 1"
] | 🐛 Bug
It appears that the Trainer flag truncated_bptt_steps doesn't affect the validation phase. Should it? The problem I'm running into is that I need truncated_bptt_steps to virtually increase the length of the sequence I can fit into my GPU memory, but this purpose is defeated when the validation step doesn't als...
Loading a model from PL 1.2 that was saved in PL 1.1 breaks | [
"bug",
"help wanted",
"waiting on author",
"checkpointing",
"priority: 1"
] | 🐛 Bug
I saved a model trained with PL 1.1 from an environment with PL 1.2 and it breaks. There are some PL specific objects that get pickled into the checkpoint. This shouldn't happen. See error below:
Traceback (most recent call last):
File "scripts/train_bart_seq2seq_augmented_kilt.py", line 45, in <module>
mo... |
[BUG] `auto_move_data` does not work with `DataParallel` | [
"bug",
"help wanted",
"strategy: dp",
"priority: 1"
] | 🐛 Bug
If your forward function is wrapped with auto_move_data, it will not work with DataParallel, because it will try to send the data to self.device, which in DataParallel is always the main device.
i.e. the following won't work with accelerator="dp" (and probably also with "ddp"):
class Module(pl.LightningModule)... |
Training stalls with DDP multi-GPU setup | [
"bug",
"help wanted",
"priority: 0",
"waiting on author"
] | 🐛 Bug
My training / validation step hangs when using ddp on a 4-GPU AWS instance.
Usually it happens at the end of the first epoch, but sometimes in the middle of it.
Code runs fine on 1 GPU.
My model checkpoint setup is very basic:
checkpoint_callback = pl.callbacks.ModelCheckpoint(
args.checkpointdir,
... |
CUDA memory leak after batch size finder | [
"bug",
"help wanted"
] | 🐛 Bug
Using transformers + the AdamW optimizer + the batch size finder results in ~2-3 GB of GPU memory not being freed after
trainer.tune (for xlm-roberta-base). This causes OOM issues on a subsequent call of trainer.fit.
I suspect that the state of the AdamW optimizer causes this issue.
Please reproduce using the BoringMode... |
Trainer.predict(), LightningModule.predict(), and LightningDataModule.predict_dataloader() are not in documentation | [
"docs"
] | 📚 Documentation
Currently the Trainer.predict() method, along with the LightningModule.predict() and LightningDataModule.predict_dataloader() hooks, are not mentioned anywhere in the documentation so users don't know about their existence unless they've read through the source files. This led to confusion on my part s... |
trainer.test() fails when using both auto_lr_find and ModelPruning | [
"bug",
"help wanted",
"won't fix",
"priority: 1"
] | 🐛 Bug
Hi, I wasn't able to reproduce properly with the BoringModel, but did with a small CIFAR10 example.
Description
trainer.test() errors out when I use both the ModelPruning callback and auto_lr_find. Disabling either of these makes trainer.test() work again. I'm using the mainline of pytorch-lightning.
Example
import os
import t... |
during training, model not able to save checkpoint at the end of every epoch | [
"bug",
"help wanted"
] | 🐛 Bug
When I try to train the model, it does not save a checkpoint at the end of every epoch.
PyTorch Version : 1.1.4
OS: Linux 18.04
How you installed PyTorch (conda, pip, source): conda
Build command you used (if compiling from source):
Python version: 3.6
CUDA/cuDNN version: cuda 10.0
GPU models and configuratio... |
CodeCarbon monitor callback | [
"feature",
"help wanted",
"won't fix"
] | 🚀 Feature
Carbon emissions monitoring.
Motivation
https://mlco2.github.io/impact/#about
Pitch
Add a callback like lr_monitor, but to monitor estimated emissions. Simplest way would be to use the codecarbon library, although this wouldn't support distributed training (maybe possible, could at least log emissions for ... |
performance drop from v1.1.8 to >= v1.2.0 when using metrics | [
"bug",
"help wanted",
"working as intended",
"priority: 2"
] | 🐛 Bug
Hi, I'm observing longer epoch times for PyTorch Lightning versions >= 1.2.0 when using metrics in the training or validation steps. Depending on the model, dataset, etc., I observed an increase of 10 seconds per epoch.
I tested it using Accuracy() and ConfusionMatrix() and the impact is larger for the former.
Remo... |
ModelCheckpoint not accepting argument 'filepath' | [
"bug",
"help wanted"
] | 🐛 Bug
When filepath is used as an argument in ModelCheckpoint, it raises TypeError: __init__() got an unexpected keyword argument 'filepath'
Link for the colab notebook: https://colab.research.google.com/drive/1-ECmP0JTPYXFSDt__K93Cm4wzW7OdyXS?usp=sharing
Expected behavior
Environment
CUDA:
GPU:
available: Fa... |
Calling trainer.test() when using fast_dev_run throws confusing error | [
"bug",
"help wanted"
] | 🐛 Bug
Calling trainer.test() when using fast_dev_run throws a confusing error:
Traceback (most recent call last):
File "main.py", line 89, in <module>
tra... |
Revisit CONTRIBUTING.md | [
"docs",
"priority: 1"
] | 📚 Documentation
Update CONTRIBUTING.md to include detailed steps and missing guidelines for the PyTorch Lightning developer machine setup. The setup instructions are currently under the "Testing" section, and they don't have information about all the necessary dependencies. We should rename the section to make it easy for develop... |
Wall time auto-resubmit not working | [
"bug",
"help wanted",
"won't fix",
"environment: slurm",
"priority: 1"
] | Hi :)
I fear the wall time auto-resubmit is not working for me.
I'm using this submit script:
#SBATCH --job-name=NORA_test_MNIST
#SBATCH --ntasks=1
#SBATCH --cpus-per-task=1
#SBATCH --mem=12000
#SBATCH --gres=gpu:1
### #SBATCH --mem-per-gpu=11000
#SBATCH -o /home/%u/%x-%j-on-%N.out
#SBATCH -e /home/%u/%x-%j-on-%N.err
#... |
ddp: Consider warning / failing fast if single proc relies on `len(gpus) >= 8` for CUDA, due to peer-to-peer resource limitations? | [
"feature",
"help wanted",
"won't fix",
"distributed"
] | 🚀 Feature
If using accelerator="ddp" (or any other backend that may trigger this error), ideally there's a warning and/or error about possibly encountering this issue, e.g.:
WARNING: You are requesting > 8 GPU devices be shared via CUDA's peer-to-peer protocol, but it generally
supports <= 8 devices. Please consider d... |
External MLFlow logging failures cause training job to fail | [
"bug",
"help wanted",
"won't fix",
"logger",
"3rd party"
] | 🐛 Bug
I am using a pytorch_lightning.loggers.mlflow.MLFlowLogger during training, with the MLFlow tracking URI hosted in Databricks. When Databricks updates, we sometimes lose access to MLFlow for a brief period. When this happens, logging to MLFlow fails with the following error:
urllib3.exceptions.MaxRetryError: HTT... |
Add illustration of hooks in the LightningModule | [
"feature",
"docs",
"priority: 0"
] | Add a code snippet showing when each of the LightningModule hooks is being called. |
pl.Trainer.add_argparse_args() does not work with argparse subparsers | [
"bug",
"help wanted"
] | 🐛 Bug
pytorch-lightning/pytorch_lightning/utilities/argparse.py, lines 157 to 160 (at 7114c2d):
parser = ArgumentParser(
    parents=[parent_parser],
... |
`Trainer.predict` stops gradients globally until `torch.set_grad_enabled(True)` is called | [
"bug",
"help wanted",
"priority: 1"
] | 🐛 Bug
After Trainer.predict is called, gradients are never turned back on. This can cryptically impact tests (see #6595).
See error in notebook: https://colab.research.google.com/drive/1vbKcVwApZEcWX_ryyx-2BJXdLD_nkFBG?usp=sharing |
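The reported behaviour is easy to demonstrate with plain PyTorch, along with the manual workaround the title suggests (illustrative snippet, not from the issue):

```python
import torch

# Simulate the state Trainer.predict leaves behind: grad mode globally off.
torch.set_grad_enabled(False)
x = torch.ones(2, requires_grad=True)
assert not (x * 2).requires_grad   # autograd is silently disabled

# Workaround until the bug is fixed: re-enable gradients manually.
torch.set_grad_enabled(True)
assert (x * 2).requires_grad
```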
Remove requirement of PyYAML!=5.4.x | [
"feature",
"help wanted",
"let's do it!"
] | 🚀 Feature
Remove dependency requirement of PyYAML!=5.4.x
Motivation
According to safety, PyYAML versions below 5.4 have a security vulnerability, which means that our pre-commit hooks don't allow upgrading to a newer version of Lightning.
-> pyyaml, installed 5.3.1, affected <5.4, id 39611
A vulnerability was discovere... |
Add ability to specify `artifact_location` when using `MLFlowLogger` | [
"feature",
"help wanted"
] | 🚀 Feature
Expose the artifact_location argument of MlflowClient.create_experiment when it is called inside MLFlowLogger.experiment. This could be set via an argument when MLFlowLogger is instantiated, along with tracking_uri or save_dir, to specify a custom location to save artifacts.
Motivation
When I use the save_dir argu... |
ImportError: cannot import name 'PY3' from 'torch._six' | [
"bug",
"help wanted"
] | I tried running the example from the 'RAPID PROTOTYPING TEMPLATES' section of the PyTorch Lightning documentation (https://pytorch-lightning.readthedocs.io/en/latest/starter/rapid_prototyping_templates.html), but it gives the following error: "ImportError: cannot import name 'PY3' from 'torch._six' (/usr/local/lib/python...
Use a pickle-alternative for serialization | [
"feature",
"help wanted",
"3rd party"
] | 🚀 Feature
Right now, if you use ddp_spawn or ddp-cpu, you are forced to make everything in your script pickleable. This is unfortunate, because there are many things that pickle cannot serialize correctly (e.g. lambda functions). One particular point of conflict is developing in a notebook -- if you write your pl.Ligh... |
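The lambda limitation mentioned above is easy to reproduce with the standard library alone (illustrative snippet, not from the issue):

```python
import pickle

# Module-level lambdas are pickled by reference, and the name '<lambda>'
# cannot be looked up again later -- so pickle refuses to serialize them.
square = lambda x: x * x
try:
    pickle.dumps(square)
    pickled = True
except (pickle.PicklingError, AttributeError):
    pickled = False
assert pickled is False  # this is why ddp_spawn chokes on lambda-containing modules
```

Alternatives such as cloudpickle serialize such objects by value rather than by reference, which is presumably the kind of replacement this request is after.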