title: stringlengths 5 to 164
labels: list
bodyText: stringlengths 0 to 46.7k
test transfer to discussion thread
[ "question" ]
a quick test
Saving meta tags hparams file in Tensorboard Logger can occur multiple times
[ "bug", "help wanted", "logger" ]
πŸ› Bug The bug exists exactly here https://github.com/PyTorchLightning/pytorch-lightning/blob/master/pytorch_lightning/loggers/tensorboard.py#L220. The if statement is checking using os.path rather than self._fs. As written, when using a non-local filesystem (such as S3, etc.), the os.path.isfile will always return fal...
Make EarlyStopping throw an error on unknown modes
[ "feature", "help wanted" ]
πŸš€ Feature Right now, passing in a nonsense mode to pl.callbacks.EarlyStopping passes silently >>> import pytorch_lightning as pl >>> pl.callbacks.EarlyStopping("val_f1", mode="werwer") In the code, it looks like this is deliberate: in this case, we only raise a warning when the verbosity is >0 pytorch-light...
Fix Test Set doc to include datamodule
[ "docs" ]
In the test set documentation it still uses the old test_dataloader format and not the new datamodule argument: https://pytorch-lightning.readthedocs.io/en/stable/test_set.html
LightningModule .load_from_checkpoint requires specific model signature
[]
I have a LightningModule similar to this: class MyLightningModule(pl.LightningModule): def __init__(self, model: nn.Module, hparams: argparse.Namespace): init code here @staticmethod def add_task_specific_args(parent_parser: Optional[Union[argparse.ArgumentParser,list]] = None): argument...
What sync_dist_op can be set in self.log()?
[ "question" ]
Hi, the documentation shows sync_dist_op="mean" by default. But in my case, I train in DDP with 2 GPUs and I want to record the number of true-positive samples in the validation dataset. So sync_dist_op should be set to "sum", which I didn't see in the source code or the documentation. Another thing is that I want to record...
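A tiny plain-Python sketch (the per-GPU counts are made up) of why a count-style metric wants a "sum" reduction across DDP processes rather than the default "mean"; the self.log call in the comment is an assumption to be checked against your Lightning version's signature.

```python
# Hypothetical true-positive counts computed independently on each of
# the 2 DDP processes (numbers are invented for illustration).
per_gpu_tp = [37, 42]

# The default "mean" reduction averages the counts across processes:
mean_reduced = sum(per_gpu_tp) / len(per_gpu_tp)  # 39.5 -- not a valid count

# A "sum" reduction recovers the dataset-wide count:
sum_reduced = sum(per_gpu_tp)  # 79

# In a LightningModule this would correspond to something like
# (assumption -- verify against your Lightning version's self.log):
#   self.log("val_tp", tp, sync_dist=True, sync_dist_op="sum")
print(mean_reduced, sum_reduced)
```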
Replace custom samplers with distributed versions when replace_sampler_ddp=True
[ "feature", "help wanted", "won't fix", "docs" ]
πŸš€ Feature Replace custom samplers with distributed versions when replace_sampler_ddp=True. Motivation One of the issues I've been having is using custom samplers with distributed training. Prior to Lightning, I was using the DistributedSamplerWrapper class from Catalyst (https://catalyst-team.github.io/catalyst/_modul...
Secondary Pytorch LightningModule-declared Tensors not migrating to correct device when called from other LightningModule.
[ "help wanted", "question", "working as intended" ]
πŸ› Bug Background After working on a Seq2Seq model with attention using only LightingModules, I was still getting on_device errors. This also occurred with PackedSequence objects but I think that is due to a PyTorch cpu-specific implementation, not Lightning. This also occurred with functions returning tensors called ...
Loading checkpoint creates new logging directory and tries to load checkpoint from that directory
[ "bug", "help wanted" ]
πŸ› Bug I'm doing training and then loading the checkpoint later to do some experiments. I'm using hydra. What's happening is the checkpoints are being created as expected. However, after I load the checkpoint with load_from_checkpoint a new output directory is created, and the checkpoint attempting to be read from ...
Clean printing to terminal alongside progress bar
[ "feature", "help wanted", "good first issue" ]
πŸš€ Feature A way to print to terminal without breaking up the progress bar. Motivation A lot of people print stuff to terminal while training/validating/testing, and currently a simple call to print() will break the progress bar. A way to get around this is to set up a custom progress bar, with methods for calling tqdm...
the speed of each iteration halves with two GPUs
[ "bug", "help wanted", "waiting on author", "priority: 1" ]
πŸ› Bug I am running this github repository https://github.com/mindslab-ai/faceshifter. In the first experiment, I used one GPU and the speed was about 1.34it/s (there was 994470 iterations, using DDP). In the next experiment, I used two GPUs and this time the speed almost halved to 1.38s/it (this time there was 497235...
How to use sparse.mm in float16 training pipeline
[ "question" ]
What is your question? How can we designate a certain operation (e.g. torch.sparse.mm) as a float32 operation in a float16 training setting? Details and what I have tried I am trying to train a model using pl.Trainer(distributed_backend='ddp', precision=16, amp_level='01', gpus=2) and I need to use sparse tensor multiplication...
Error: forward() missing positional argument
[ "question" ]
I'm trying to implement a model with multiple input images, something similar to this: https://rosenfelder.ai/multi-input-neural-network-pytorch/ Here is my model: class FaceModel(pl.LightningModule): def __init__(self): super().__init__() self.example_input_array = torch.rand(1, 3, 64, 64) ...
Update PipeRPCPlugin compatibility to pytorch 1.7.0+
[ "feature", "help wanted" ]
πŸš€ Feature Latest pytorch-lightning supports Sequential Model Parallelism, but it is compatible only with torch==1.6.0. Supporting 1.7.0+ would allow us to exploit the latest GPUs such as the RTX 3000 series. Motivation Fine-tuning models too large for a single GPU. Pitch PipeRPCPlugin to support torch==1.7.0 or higher. Alternative...
Named formatting options in filename argument is broken when it contains "/"
[ "feature", "help wanted", "good first issue", "won't fix", "logger", "checkpointing", "priority: 1" ]
πŸ› Bug / Limitation I think this might be a limitation instead of a bug. Named formatting options in filename argument is broken when it contains "/". "/" is used because I want to group different metrics. From pytorch documentation Lots of information can be logged for one experiment. To avoid cluttering the UI and ...
TestTubeLogger fails to log tensorboard hparams
[ "bug", "help wanted" ]
πŸ› Bug I recall TestTubeLogger used to give me an HParams tab in tensorboard. Possibly since the changes described #2974 #3610 this is no longer the case, and the "fix" seems to entail discarding TestTubeLogger and using TensorboardLogger instead, but I am unable to do such a thing due to heavy code dependence on TestT...
Logging with gradient accumulation & DDP
[ "question" ]
I have got two questions about logging if using gradient accumulation and DDP: Is there a way to average logged values across the accumulated batches? Logging in training step simply as follows: def validation_step(self, batch, batch_idx): ... self.log('val_loss', outputs.loss, sync_dist=True) return loss...
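One hedged way to get a single averaged value per optimizer step is to collect the micro-batch losses yourself and log their mean once per accumulated step; a plain-Python sketch with hypothetical loss values:

```python
import math

# Hypothetical losses from the 4 micro-batches that make up one
# accumulated optimizer step (accumulate_grad_batches = 4).
micro_batch_losses = [0.92, 0.88, 0.85, 0.83]

# Averaging them manually yields one value per *optimizer* step, which
# can then be logged once instead of once per micro-batch.
step_loss = sum(micro_batch_losses) / len(micro_batch_losses)  # ~0.87
assert math.isclose(step_loss, 0.87)
```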
Sequential model parallel with multi nodes regarding --num_nodes
[ "bug", "help wanted", "priority: 1" ]
Sequential model parallel with multi nodes regarding --num_nodes Please reproduce using the BoringModel 8 MP with 2 DP as follows. # node 0 MASTER_ADDR=XXX.XXX.XXX.1 MASTER_PORT=7874 NODE_RANK=0 python train.py --gpus 8 --accelerator ddp .... --use_ddp_sequential --num_nodes 2 # node 1 MASTER_ADDR=XXX.XXX.XXX.1 MAST...
Plotting metrics with Tensorboard plots two graphs instead on one. What is the second one?
[ "question", "logger", "3rd party" ]
Before asking: I tried looking for an answer and asking on StackOverflow, but no luck. When plotting to Tensorboard with e.g. self.log('validation_accuracy_epoch', self.val_acc.compute()), the output is two lines - a dark and a light one. Only the dark one allows any interaction. What is the other line? Code full cod...
Hanging process on DDP and ModelCheckpoint Callback if one of the dirpath is None
[ "bug", "help wanted" ]
πŸ› Bug When running on DDP and using the ModelCheckpoint Callback, if the callback is given dirpath=None in one of the processes, the trainer hangs before sanity checks start and becomes unresponsive. Please reproduce using the BoringModel Cannot reproduce on Colab due to needing 2 GPUs to reproduce To Reproduce # Copy...
TypeError: optimizer_step() got an unexpected keyword argument 'on_tpu'
[ "question", "won't fix", "waiting on author" ]
❓ Questions and Help I have got this error /usr/local/lib/python3.6/dist-packages/pytorch_lightning/trainer/trainer.py in fit(self, model, train_dataloader, val_dataloaders, datamodule) 468 self.call_hook('on_fit_start') 469 --> 470 results = self.accelerator_backend.train() 471 se...
Logging momentum in LearningRateMonitor
[ "bug", "help wanted" ]
πŸ› Bug Trying to log momentum in LearningRateMonitor crashes training. When log_momentum = False, everything runs smoothly. Please reproduce using the BoringModel https://colab.research.google.com/drive/1JiSrB4JYexxMAjPPdnbv4Q0ShqRH2YJQ?usp=sharing Environment CUDA: GPU: Tesla T4 available: True version: ...
Unexpected global_steps/ accelerator behaviour
[ "bug", "help wanted" ]
πŸ› Bug I observed some, imo, unexpected behaviour when experimenting with different combinations of gpus/ num_processes and accelerator trainer args. To Reproduce I use the following bug_report_model.py. I adapted the one from pl to take accelerator, gpus, and num_processes args, only use fake train_data, and train fo...
Loading PL model from checkpoint with only weights
[ "question" ]
Hello, How to properly load the PL model when ModelCheckpoint has save_weights_only=True ?
Error with `load_from_checkpoint`
[ "bug", "working as intended", "checkpointing", "priority: 1" ]
I am trying to fine-tune a language model and facing some issues with loading the model from the saved checkpoint. I have defined my own model which takes in argparse.Namespace as the input as is suggested in the documentation here. class FineTuner(pl.LightningModule): def __init__(self, hparams): # pass all the a...
Add docs to all Trainer attributes
[ "docs" ]
Find all attributes under: dir(self.trainer)
Use scheduler.get_last_lr() instead of manually searching for optimizers.param_groups
[ "feature", "help wanted" ]
πŸ› Bug / Feature Request I am not sure whether to classify this issue as a feature request or bug, but in your codebase, you get the learning rate from schedulers as scheduler.optimizer.param_groups[0]['lr']. However, PyTorch provides the get_last_lr() method for this. I created my own scheduler, which allows me to com...
Refactor: hpc_load and entangled logics in CheckpointConnector
[ "feature", "help wanted", "refactor", "checkpointing" ]
πŸš€ Feature Refactor CheckpointConnector by reducing duplicated parts and separating different functionality. Motivation CheckpointConnector can be refactored. hpc_load: almost all hpc_load code is duplicated with restore (normal restore). Entangled logic: the top-level method restore_weights has several functionality/respo...
LSTM training with hiddens in training_step()
[ "question", "waiting on author" ]
I am trying an LSTM network where I require in each training step the hidden value to be passed over to the next training step until the end of the epoch. At the start of a new epoch we reset by initializing the hidden state to zeros. Following is how I do this: def training_step(self, batch, batch_nb, hiddens): ... ...
trainer.tune() fails when Trainer.__init__(auto_lr_find=True, auto_scale_batch_size=True)
[ "bug", "help wanted", "trainer: tune", "priority: 1" ]
πŸ› Bug trainer.tune() works just fine when either Trainer.__init__(auto_lr_find=False, auto_scale_batch_size=True) or Trainer.__init__(auto_lr_find=True, auto_scale_batch_size=False) However, trainer.tune() fails when Trainer.__init__(auto_lr_find=True, auto_scale_batch_size=True) LR finder stopped early due to divergi...
How to implement a custom Metric based on sklearn's functions
[ "question" ]
❓ Questions and Help What is your question? How to implement a custom Metric based on sklearn's functions? Code I tried to implement a ROCAUC Metric with help of sklearn's roc_auc_score, because it supports multilabel classification. from pytorch_lightning.metrics import Metric from sklearn.metrics import roc_auc_score...
pl_examples reinforce_learn_qnet wrong argument name and description in argparse
[ "bug", "help wanted" ]
πŸ› Bug In refinforce_learn_qnet.py, the argument parser adds two arguments that the model doesn't take input to: max_episode_reward and warm_start_size. It adds both warm_start_size and warm_start_steps arguments instead of just warm_start_steps. Can be seen here. warm_start_steps has a wrong description as well (seems...
Value interpolation with hydra composition
[ "bug", "help wanted", "priority: 1" ]
I am using hydra composition with the following structure: β”œβ”€β”€ configs β”‚   β”œβ”€β”€ config.yaml β”‚   β”œβ”€β”€ data β”‚   β”‚   β”œβ”€β”€ dataset_01.yaml β”‚   β”‚   └── dataset_02.yaml β”‚   └── model β”‚       β”œβ”€β”€ bert.yaml β”‚       └── gpt.yaml config.yaml defaults: - model: bert - data: dataset_01 ... data/dat...
adding proximal policy optimization template to pl_examples
[ "feature", "help wanted" ]
πŸš€ Feature An implementation of proximal policy optimization (PPO) to pl_examples. Motivation pl_examples features one reinforcement learning algorithm- DQN. It could be good to have a template of a new policy gradient method. I had implemented PPO in Lightning and it was mentioned on Slack that this could be useful to...
pre-commit isort hook failure.
[ "duplicate", "feature" ]
When running pre-commit run isort --all-files, I get the following: isort....................................................................Failed files were modified by this hook Fixing /home/arnaud/dev/pytorch-lightning/tests/base/models.py Fixing /home/arnaud/dev/pytorch-lightning/pytorch_lightning/callbacks/mod...
W&B logger not working as expected with accumulate_grad_batches>1
[ "bug", "help wanted", "priority: 0", "logger" ]
πŸ› Bug When logging inside training step to wandb logger and using accumulate_grad_batches > 1 the behavior is not as expected. Similar issue as in #4304 for Tensorboard (which was closed and the fix was merged in #4738). First half with accumulate_grad_batches == 1, second with accumulate_grad_batches == 8: Moreover,...
LightningModule models using `setup` don't load checkpoints properly.
[ "bug", "discussion", "design", "checkpointing" ]
πŸ› Bug Using setup methods within a LightningModule does not allow proper checkpoint loading using the load_from_checkpoint method. Furthermore, even if setup and torch.load are used to manually load a checkpoint, the trainer seems to always invoke setup with fit is called, thereby overwriting the loaded parameters. Th...
allow val_check_interval to be larger than training dataset size
[ "feature", "help wanted", "priority: 1", "priority: 2" ]
πŸš€ Feature allow val_check_interval to be larger than the number of training batches in one epoch Motivation I am using a small dataset, so instead of specifying max_epochs in Trainer, I want to use max_steps and evaluate every val_check_interval, but when val_check_interval is larger than the number of batches in ...
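A rough sketch of the limitation and a whole-epoch workaround, with hypothetical numbers (check_val_every_n_epoch is a Trainer argument; the conversion below is an approximation, not an exact equivalent):

```python
# Hypothetical small dataset: 80 training batches per epoch.
num_training_batches = 80
val_check_interval = 200  # desired: validate every 200 training steps

# val_check_interval must currently be <= num_training_batches, so the
# desired setting is rejected.  An approximate workaround is to validate
# every N whole epochs instead:
check_val_every_n_epoch = val_check_interval // num_training_batches  # 2
# -> validation every 160 steps, a whole-epoch approximation of 200.
print(check_val_every_n_epoch)
```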
Remove mypy from pre-commit hooks
[ "feature", "won't fix" ]
As of now, pre-commit hooks are not used by developers. I see 2 reasons for dropping mypy hooks at this point: the mypy command line uses setup.cfg as its configuration file. It is fine when used alone to get the configuration and list of files. But when running with pre-commit, pre-commit adds the files on the command line, which o...
Document exceptions
[ "good first issue", "docs" ]
πŸ“š Documentation This is only a suggestion, but I find it useful to have what exceptions functions/classes can raise in the docs, so I'd like to suggest adding the Raises: section to public functions/classes. Progress dirname $(grep -rl "raise " pytorch_lightning)|sort|uniq|awk '{print "- [ ] "$1}' pytorch_lightning/...
`on_test_end` is not called in test
[ "bug", "help wanted", "design", "priority: 1" ]
πŸ› Bug I'm running a model with fit and test called. However I noticed that on_test_end is not called at the end of test. Please reproduce using the BoringModel https://colab.research.google.com/drive/1j5J8TXAIqoFCqc-3WnAPMox8amNsRDjE?usp=sharing To Reproduce Use following BoringModel and post here See the code additio...
Remove unused import in `accelerators`
[ "feature", "help wanted", "refactor" ]
πŸš€ Feature pytorch_lightning/accelerators/*.py There are many unused imports in pytorch_lightning/accelerators. These should be removed. Motivation Pitch Alternatives Additional context
Validation step is ignored when using DataModule
[ "question" ]
What is your question? Hi, guys! I created my own DataModule and am loading it into the trainer. However, it appears that the "fit" is skipping the validation step. How can I ensure that the code runs through the validation step too? Code class DataModule(pl.LightningDataModule): def __init__(self, batch_size=25, seed=0...
[BUG] Logging in a callback does not work with multiple optimizers
[ "bug", "help wanted", "priority: 0" ]
πŸ› Bug #5063 is not solved when logging in callbacks (@tchaton ). Specifically, self.log returns the following File "/usr/local/lib/python3.8/dist-packages/pytorch_lightning/trainer/connectors/logger_connector/epoch_result_store.py", line 240, in auto_reduce_results_on_epoch_end opt_outputs = epoch_metrics[opt_idx]...
Mismatching stats logged while training and while evaluating with the saved model
[ "question" ]
❓ Questions and Help Before asking: Try to find answers to your questions in the Lightning Forum! Search for similar issues. Search the docs. What is your question? I manually print the training accuracy after every epoch. But get different accuracy when evaluate the training set using the saved model at that epoch....
TPU and multi-GPU for RL
[ "feature", "help wanted", "won't fix" ]
πŸš€ Feature RL on TPUs and multi-GPUs Motivation Run "seed rl" and other asynchronous and distributed learning in/for RL
model_to_device() missing 1 required positional argument 'process_idx'
[ "bug", "help wanted", "environment: slurm" ]
πŸ› Bug When running the code for ddp_cpu on SLURM based cluster, I get this error: Traceback (most recent call last): File "image_classifier.py", line 99, in <module> cli_main() File "image_classifier.py", line 87, in cli_main trainer.fit(model, datamodule=dm) File "/pylon5/cis200022p/balu/softwares/pytorch/lib/python...
How to gather results during inference in ddp
[ "question" ]
❓ Questions and Help Hi, I am using multiple gpus and ddp mode for model inference. I am wondering how to gather the results from all distributed processes and save them into one file in the test_epoch_end. My code looks like this: Code class PLModel(pl.LightningModule): def on_test_epoch_start(self): self....
Automatic calculation of layer sizes
[ "feature", "help wanted", "won't fix" ]
πŸš€ Feature Calculate number of layer inputs (especially linear layers) automatically, without having to resort to manual calculation or print Motivation I have a model that receives multiple images as inputs. Each input goes through some number of convolution/pool layers, each with potentially different kernel sizes/st...
Add a `Trainer.predict()`, similar to `Trainer.test()`, except returns the predictions
[ "feature", "priority: 0", "design" ]
πŸš€ Feature Add a Trainer.predict(), similar to Trainer.test(), that returns the predictions Motivation I want to get prediction, without writing for batch in ..., to(cuda), output.append() ... Pitch Making prediction should be part of a LightningModule "system". And it should be easy to add this functionality. Alternat...
Add plugin trainer flag- Trainer(plugin="ddp_sharded")
[ "feature", "help wanted" ]
@edenlightning commented on Wed Nov 18 2020 As a user, I would like to set the plugin using a string flag. Currently, you need to pass in the plugin explicitly. It would be nicer if we could make the plugin configurable, so that the user only has to pass a string to enable a plugin. @edenlightning commented on Thu Nov 1...
Model- and data-specific unit tests
[ "feature", "ci", "discussion" ]
πŸš€ Feature Trainer option to run callbacks for model- and data-specific unit testing Motivation The docs mention a fast_dev_run option for initializing a trainer. Correct me if I'm wrong, but my understanding is this simply hits every line of code in the train/val loops to ensure there won't be any crashes on a full-sc...
Continuing training when using learning rate schedulers
[ "question" ]
❓ Questions and Help When restarting training on a model using learning rate scheduler, it seems like the original learning rate is used rather than the scheduler-update learning rate. Code For example, a model with the following configure_optimizers: def configure_optimizers(self): optimizer = optim.Adam(self....
Wrong exception message
[ "bug", "good first issue", "refactor" ]
pytorch-lightning/pytorch_lightning/accelerators/accelerator_connector.py Lines 344 to 347 in 1f6236a raise MisconfigurationException( 'DataParallel does not support num_nodes > 1. Switching to Di...
LightningModule docs is wrong when defining "Training with DataParallel"
[ "won't fix", "docs" ]
The docs say that when training with DataParallel the training_step_end function should look like this: def training_step_end(self, batch_parts): gpu_0_prediction = batch_parts[0]['pred'] gpu_1_prediction = batch_parts[1]['pred'] # do something with both outputs return (batch_parts[0]['loss'] + batch_parts[1]...
Empty "outputs" argument in the on_train_batch_end() method of Callback
[ "bug", "help wanted", "priority: 1" ]
πŸ› Bug The "outputs" argument of the 'on_train_batch_end' method of a lightning Callback seems to be empty, unless training_epoch_end() is implemented in the lightning model. I'm looking for a way to process the outputs of training_step() in a callback. If I'm not mistaken, the "outputs" argument of the on_train_batch_...
on_after_backward docs for logging histograms
[ "question" ]
What is your question? I was following the example on_after_backward code for logging parameter variables for tensorboard histograms. https://pytorch-lightning.readthedocs.io/en/latest/_modules/pytorch_lightning/core/hooks.html#ModelHooks.on_after_backward Code # example to inspect gradient information in tensorboard i...
Illegal instruction (core dumped) when running MNIST Hello World
[ "bug", "help wanted", "won't fix", "priority: 2" ]
πŸ› Bug This may be similar to issue #5488, with the following differences: I'm not using a GPU the error message is Illegal instruction (core dumped) rather than Segmentation fault (core dumped). I copy pasted the code here, modifying the trainer declaration so as not to use a GPU: import os import torch from torch ...
Remove assert in production code
[]
asserts are removed when compiling to optimized bytecode (python -O producing *.pyo files). This causes various protections to be removed. assert isinstance -> raise TypeError #5536 assert *.rank == 0 -> assert len(...) == len(...) ->
fast_dev_run breaks with val_check_interval
[ "bug", "help wanted", "priority: 0" ]
πŸ› Bug It looks like val_check_interval must be <= num_training_batches, but the latter is set to 1 in fast_dev_run, so things break. Please reproduce using the BoringModel https://colab.research.google.com/drive/1cMWbhk5pmkGe_y6znQFHebRUrqa_ssu9?usp=sharing To Reproduce Use following BoringModel and post here Expecte...
Why are losses different when logging from '_step' (with on_epoch=True) compared to logging from '_epoch_end'?
[ "bug", "help wanted", "3rd party", "priority: 1", "logging" ]
πŸ› Bug When logging losses from {prefix}_step with self.log("{prefix}_loss", loss, on_step=False, on_epoch=True), they are different from losses logged in {prefix}_epoch_end, using avg_loss = torch.stack([x["loss"] for x in outputs]).mean() self.log( name="loss/" + prefix, value=avg_loss, prog_bar=Fal...
Load callback states while testing.
[ "feature", "help wanted", "checkpointing", "priority: 1", "trainer: validate", "trainer: test" ]
πŸš€ Feature Load callback states while testing. Motivation #5161 (comment) Pitch Two possible API changes: with an additional argument restore_states: test(ckpt_path, restore_states=True/False) # give an option whether to load states or not test(model, ckpt_path, restore_states=True/False) # same as above but will jus...
Failing to load the GPU-trained model onto CPU-machine
[ "question", "checkpointing" ]
What is your question? Hi, I can load neither the model nor the checkpoint from GPU onto a CPU machine. I followed the docs but the problem still remains. How can I resolve this issue? Please see the code sections below and their corresponding error messages. Code m_ae_model = LitAE.load_from_checkpoint(r'Path\version_1_load...
GAN Manual optimization not working after 1.0.7
[ "bug", "help wanted", "priority: 0" ]
πŸ› Bug I have a GAN model and as of 1.0.7 it no longer trains correctly. I've tried to troubleshoot the issue to the best of my ability but I have no idea what's causing the problem. Please reproduce using the BoringModel https://colab.research.google.com/gist/import-antigravity/0730243bb11b56031110fd6aa7d58971/the-bo...
How to save/load hyperparameters of system components?
[ "question", "waiting on author" ]
How should I handle hyperparameters of a submodule? For example in the snippet below, say MyGenerator takes its hparams in its constructor, how should I handle that? class LitMNIST(LightningModule): def __init__(self, loss_fx, generator_network, layer_1_dim=128, **kwargs): super().__init__() self.layer...
validation_epoch_end or on_train_epoch_end receive extra arguments/data
[ "question" ]
❓ Questions and Help What is your question? Basically I need to validate my model after each epoch, but the label information is a little tricky (a mixture of lists of tuples of ints and strings), so it cannot be included in a DataLoader. However, validation_epoch_end, validation_step, etc. only receive DataLoader out...
Refactoring GIF: pl.TrainResult is deprecated, GIF needs to be updated
[ "duplicate", "docs" ]
πŸ“š Documentation The example gif showing refactoring needs to be updated as the reference to pl.TrainResult has been deprecated. Thanks!
GPU memory leak in For Loop with AMP mode
[ "bug", "priority: 2" ]
πŸ› Bug Computation in For loop with AMP ON cause GPU memory leak, and crash training. To Reproduce Notebook. Core part: def generate(self, device: str) -> None: ipt = torch.tensor([[float(j) for j in range(0, self.size_i)]], device=device) for _ in range(0, 10000): self.fc(ipt) Run .gene...
Need check_val_every_n_steps in Trainer
[ "feature", "help wanted" ]
πŸš€ Feature Add an argument check_val_every_n_steps to the Trainer.__init__ function to check validation-set metrics every certain number of steps. Motivation For many tasks, large models are trained in steps, not complete epochs, especially pretrained models in CV and NLP. As a consequence, step-based arguments like max_steps, log_e...
Training using DDP and SLURM
[ "question", "distributed", "environment: slurm" ]
❓ Questions and Help What is your question? The current scenario is two nodes with different free GPUs. For instance, node1 has 5 free GPUs and node2 has 3 free GPUs. I can request the 8 free GPUs using SLURM without caring about the number of nodes. Is there any way that I can use PL for using the 8 available GPUs in this c...
args error when running pl_examples semantic_segmentation through command line
[ "bug", "help wanted" ]
πŸ› Bug Running python pl_examples\domain_templates\semantic_segmentation.py leads to argument options conflicting error. The issue is the same as the one with #5382 This error seems to be because add_help=True in argparse.ArgumentParser in both the parent_parser and in the parser for adding model specific args. Settin...
"has conflicts" label removal
[ "bug", "ci" ]
The "has conflicts" label is not removed automatically once conflicts are fixed cc: @Borda
Accelerator examples cannot run
[ "help wanted", "good first issue", "docs" ]
πŸ“š Documentation The accelerator documentation tells you to use a custom accelerator like: trainer = Trainer(accelerator=DDPAccelerator()) Unfortunately, this is not actually possible -- all of the accelerators (AFAICT) require a trainer argument, and actually trying to execute this code raises: T...
Add a warning for returning none for from training_step using multi-GPU
[ "docs" ]
Mechanism to skip certain hooks
[ "feature", "discussion", "design", "hooks" ]
πŸš€ Feature Do we need a way to prevent certain hooks from being executed? I'm not entirely sure how solid this idea is, so I'm hoping for some discussion :) Motivation A user encountered a use case in which they wanted to build the model in the setup hook. However, because the setup hook is executed every time regardless whe...
tensorboard displays incorrect Learning-rates
[ "bug", "help wanted" ]
πŸ› Bug For example, if the learning-rate is euler's e-6 (i.e. 0.00247875217), then it is displayed as 2.4788e-3 on the vertical axis of tensorboard's graph. The correct value should be 2.4788 * (10 raised to the power of -3). Another example, if the learning-rate is euler's e-9 (i.e. 0.0001234098), then it is displaye...
outputs of training_epoch_end for different configure_optimizers conditions
[ "question" ]
Condition One: when I write optimizer as follows: def configure_optimizers(self): return [disOptim,genOptim],[] I can simply write the training_epoch_end as follows: def training_epoch_end(self,outputs): sum_loss_D_real=torch.stack([x['D_loss_real'] for x in outputs[0]]).sum() Condition Two: However when I w...
Training is interrupted without error with multi-GPU
[ "bug", "help wanted", "priority: 0", "waiting on author", "distributed" ]
πŸ› Bug The training is interrupted randomly in the middle of an epoch without errors. The console only says: Terminated. The error does not necessarily occur, if it does then mostly between epochs 2-4. It is noticeable that processes are still running after the termination, the graphic cards are still used by python p...
Should the LightningModule contain a 'from_argparse_args' attribute as does the LightningDataModule?
[]
❓ Questions and Help Hello everyone, love the project so far. Searching for a workaround for the following issue. Any help would be greatly appreciated. What is your question? Should the LightningModule contain a 'from_argparse_args' attribute as does the LightningDataModule? Code Lightning module code class UNet(pl.Li...
Code stuck after running 1 epoch on TPU
[ "question", "accelerator: tpu" ]
❓ Questions and Help What is your question? I'm trying to run the LitAutoEncoder on TPUs, but the code runs for 1 epoch and gets stuck there. Code class LitAutoEncoder(pl.LightningModule): def __init__(self, hparams): super().__init__() self.hparams = hparams self.encoder = nn.Sequential( ...
Failing to log to Neptune.ai when resuming from checkpoint
[ "bug", "duplicate", "help wanted" ]
πŸ› Bug I'm trying to resume training from a checkpoint, and I'm trying to resume logging using the Neptune.ai logger, but it throws this particular error: neptune.api_exceptions.ChannelsValuesSendBatchError: Received batch errors sending channels' values to experiment CAS-68. Cause: Error(code=400, message='X-coordina...
MetricList for moving entire list of metrics onto GPU
[ "feature", "help wanted" ]
πŸš€ MetricList MetricList serves the same function for Metrics in PL as nn.ModuleList in PyTorch for Modules. As such, they take care of flexibly moving all Metrics and their states onto GPU or CPU. Motivation As a dynamic inference researcher, I want to be able to test multiple setups for a single model (f.e. executin...
activate lr_scheduler after epoch 10
[ "question" ]
Is there any way to activate lr_scheduler after epoch 10? { 'scheduler': lr_scheduler, # The LR scheduler instance (required) 'interval': 'epoch', # The unit of the scheduler's step size 'frequency': 1, # The frequency of the scheduler 'reduce_on_plateau': False, # For ReduceLROnPlateau scheduler 'monitor': 'val_loss'...
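One possible approach (an assumption, not from the question): give the scheduler a multiplicative factor that stays at 1.0 for the first 10 epochs, e.g. via torch.optim.lr_scheduler.LambdaLR. A plain-Python sketch of such a factor function, with a hypothetical decay rate:

```python
import math

def delayed_decay(epoch, delay_epochs=10, gamma=0.9):
    """Multiplicative LR factor: hold the initial LR for `delay_epochs`
    epochs, then decay by `gamma` per epoch (hypothetical schedule)."""
    if epoch < delay_epochs:
        return 1.0
    return gamma ** (epoch - delay_epochs)

# Could be wired into configure_optimizers as (assumption -- check your
# torch version): torch.optim.lr_scheduler.LambdaLR(opt, delayed_decay)
assert delayed_decay(0) == 1.0 and delayed_decay(9) == 1.0
assert math.isclose(delayed_decay(12), 0.81)  # two epochs of decay
```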
build-conda CI failed
[ "bug", "help wanted", "won't fix", "ci", "priority: 1" ]
πŸ› Bug build-conda CI failed almost every time. To Reproduce Run CI. Example1: current master HEAD Example2: current release/1.2-dev HEAD Expected behavior Build successfully. Environment Both master release/1.2-dev Additional context It is long-lasting error which prevent efficient CI usage (now we routinely ignor...
Understanding accumulate_grad_batches parameter?
[ "question" ]
I am very new to PL. As far as I understand, accumulate_grad_batches works similarly to 'gradient_accumulation_steps', where the main purpose is to increase the effective batch size. But I do not see any change in the training epoch step count when increasing the accumulate_grad_batches parameter. Let's say I have a datase...
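This is expected if the progress bar counts dataloader batches rather than optimizer steps: accumulation does not change how many batches are drawn per epoch, only how often the optimizer steps. A rough arithmetic sketch (the function name is illustrative, not a Lightning API):

```python
import math

def optimizer_steps_per_epoch(num_samples, batch_size, accumulate_grad_batches):
    """Optimizer steps per epoch when gradients are accumulated."""
    batches = math.ceil(num_samples / batch_size)
    return math.ceil(batches / accumulate_grad_batches)

# 1000 samples, batch size 10 → 100 batches per epoch either way;
# with accumulation=4 the optimizer only steps 25 times.
print(optimizer_steps_per_epoch(1000, 10, 1))  # → 100
print(optimizer_steps_per_epoch(1000, 10, 4))  # → 25
```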
Apex with multiple optimizers error "element 0 of tensors does not require grad and does not have grad_fn"
[ "bug", "help wanted", "3rd party" ]
πŸ› Bug File "repro apex.py", line 51, in <module> trainer.fit(model) File "/home/aw18f408/repositories/pytorch-lightning/pytorch_lightning/trainer/trainer.py", line 481, in fit results = self.accelerator_backend.train() File "/home/aw18f408/repositories/pytorch-lightning/pytorch_lightning/accelerators/gpu_...
DDPShardedPlugin consolidate_state_dict RuntimeError
[ "bug", "won't fix", "waiting on author", "distributed", "3rd party" ]
πŸ› Bug After an (seemingly arbitrary) number of steps/epochs, DDPShardedPlugin::optimizer_state crashes on its consolidate_state_dict call: Pytorch's distributed broadcast_object_list tries object_tensor = torch.ByteTensor(torch.sum(object_sizes_tensor).item()) RuntimeError: Trying to create tensor with negative dimen...
Behaviour of accumulate_gradients and multi-gpu
[ "question" ]
Training setup: 2 GPUs on a single machine running in DDP mode. If I use a batch size of 16 and accumulate gradients=2, how does lightning handle this? Possibility 1: GPU1 processes one batch of size 16. GPU2 processes one batch of size 16. average gradients from GPU1 and GPU2 and apply weight update. or Possibility ...
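Under DDP each process draws its own batch and gradients are averaged across processes on every backward (closest to possibility 1, applied per accumulation window), so the effective batch size multiplies all three factors. A minimal sketch of that arithmetic (the function is illustrative, not a Lightning API):

```python
def effective_batch_size(per_gpu_batch, num_gpus, accumulate_grad_batches):
    # Each DDP process loads its own per_gpu_batch; gradients are averaged
    # across processes, and optimizer.step() runs once every
    # accumulate_grad_batches batches.
    return per_gpu_batch * num_gpus * accumulate_grad_batches

print(effective_batch_size(16, 2, 2))  # → 64
```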
Regression between Lightning 1.1.3 and 1.1.5
[ "bug", "help wanted", "distributed" ]
πŸ› Bug Posted originally by @okuchaiev: Has anyone observed a model performance degradation when switching from 1.1.3 to 1.1.4 and 1.1.5? On the plot below you can see exactly the same model/hyperparams trained using 1.1.3 (runs named enes3) and 1.1.5 (runs named enes5). You can see that 1.1.3 outperforms 1.1.5 consist...
Update metrics to use Enum
[ "feature", "help wanted", "good first issue" ]
πŸš€ Feature Motivation Update metrics package to use Enum where it makes sense. For example: pytorch-lightning/pytorch_lightning/metrics/classification/helpers.py Lines 79 to 87 in f782230 # Get the case ...
Pass all running stages to DataModule.setup
[ "feature", "help wanted", "refactor" ]
πŸš€ Feature Currently, DataModule.setup is only called with the stages fit or test. But we have several more: Stages: pytorch-lightning/pytorch_lightning/trainer/states.py Lines 39 to 49 in 5f33728 class RunningStage(LightningEnum):...
ModuleNotFoundError: __path__ attribute not found on 'hydra' while trying to find 'hydra.experimental'
[ "bug", "help wanted", "won't fix", "3rd party" ]
πŸ› Bug AttributeError: module 'hydra' has no attribute 'path' PyTorch Version (e.g., 1.0): 1.7.1 OS (e.g., Linux): linux How you installed PyTorch (conda, pip, source): conda Python version: 3.8 CUDA/cuDNN version: 11.0/8.0 Additional context import pytorch_lightning as pl --------------------------------------------...
Specify Gradient Clipping Norm in Trainer
[ "feature", "help wanted", "won't fix", "design", "priority: 1" ]
πŸš€ Feature Allow specification of the gradient clipping norm_type, which by default is the Euclidean norm and fixed. Motivation We are using PyTorch Lightning to increase training performance in the standalone Federated Learning context (experimental setting). In this context the trained models diverge from their underlying dat...
multiple processes running all tasks after trainer.fit(accelerator="ddp")
[ "question", "distributed" ]
❓ Questions and Help What is your question? When training with ddp the script calls multiple python scripts to run the training. This causes an issue when I use the same python script to do other stuff after I'm done with training. What is the best practice here? My only solution so far would be to condition on os.environ["...
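A common pattern is exactly that: guard post-training code so only the main process runs it, since every spawned DDP rank continues executing the script after `trainer.fit` returns (Lightning also exposes `trainer.is_global_zero` for this). A minimal environment-variable sketch, assuming a launcher that exports `RANK`/`LOCAL_RANK` as torch's distributed launchers typically do:

```python
import os

def is_global_zero() -> bool:
    """True when running in the main process (or outside DDP entirely)."""
    return os.environ.get("RANK", os.environ.get("LOCAL_RANK", "0")) == "0"

# trainer.fit(model)  # all ranks run training
if is_global_zero():
    print("only the main process runs evaluation/export here")
```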
Request for additional documentation on learning rate scheduling on `step` instead of `epoch`.
[ "docs" ]
πŸ“š Documentation The current documentation states that returning {'interval': 'step'} in configure_optimizers will alter the learning rate scheduler update interval to step-wise update instead of epoch-wise update. However, as mentioned in #4929, this is not true. I would like to request an update to the documentation,...
Loss divided by `accumulate_grad_batches` number
[ "bug", "help wanted", "priority: 0", "waiting on author", "logging" ]
πŸ› Bug After the 1.1.4 with the fix 5417, logging was fixed but my loss was divided by accumulate_grad_batches. Please reproduce using the BoringModel Sorry, there is no BoringModel. I paste my code here To Reproduce def training_step(self, batch, batch_idx, optimizer_idx): with autocast(): outs...
Add a documentation page for "manual vs auto optimization"
[ "won't fix", "docs" ]
Help for adversarial learning with PyTorch Lightning
[ "question" ]
Help for adversarial learning with PyTorch Lightning What is your question? Code The old method for adversarial learning looks like this: fgm = FGM(model) for batch_input, batch_label in data: # normal training loss = model(batch_input, batch_label) loss.backward() # adversarial training fgm.attack() ...
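For context, the FGM bookkeeping that has to be ported is: back up the parameter, perturb it by ε·g/‖g‖, train on the perturbed value, then restore. Below is a minimal pure-Python sketch of that bookkeeping on `{name: (value, grad)}` pairs — real implementations perturb torch embedding weights in place, and in Lightning this would live in `training_step` with manual optimization:

```python
class FGM:
    """Minimal sketch of FGM attack/restore bookkeeping.
    Parameters are modeled as {name: (value, grad)} floats for illustration."""
    def __init__(self, params, epsilon=1.0):
        self.params = params
        self.epsilon = epsilon
        self.backup = {}

    def attack(self):
        for name, (value, grad) in self.params.items():
            self.backup[name] = value  # remember the clean value
            norm = abs(grad)
            if norm > 0:
                # Perturb in the gradient direction, scaled to unit norm.
                self.params[name] = (value + self.epsilon * grad / norm, grad)

    def restore(self):
        for name in self.backup:
            _, grad = self.params[name]
            self.params[name] = (self.backup[name], grad)
        self.backup = {}

params = {"emb.weight": (0.5, -0.2)}
fgm = FGM(params, epsilon=0.1)
fgm.attack()   # 0.5 + 0.1 * (-0.2 / 0.2) = 0.4
print(round(params["emb.weight"][0], 6))
fgm.restore()
print(params["emb.weight"][0])  # back to 0.5
```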
TensorBoardLogger doesn't close SummaryWriter on finalize
[ "bug", "help wanted" ]
πŸ› Bug The file handle managed by the SummaryWriter under the attribute _experiment in the TensorBoardLogger is never closed by any cleanup routines. This leaves a dangling file handle that restricts access to the output tfevent files until the parent script exists or the Jupyter kernel is restarted. Please reproduce u...
Why do some metrics require `num_classes=1` for binary classification?
[ "question" ]
❓ Why do some metrics require num_classes=1 for binary classification? What is your question? Why do some metrics require the argument num_classes=1 for binary classification (and some don't) to give the correct results? I find it rather unintuitive to calculate Recall/Precision/F1 with the argument num_classes=1 for...