Module lib.dataloader.LlamaDataLoader
Source code
import torch
from torch.utils.data import DataLoader, Dataset
from torch.utils.data.dataloader import _collate_fn_t


class LlamaDataLoader(DataLoader):
    # A DataLoader that always shuffles, drawing the shuffle order from a
    # torch.Generator seeded with the given seed so batch order is
    # reproducible across runs.
    def __init__(self, dataset: Dataset, collate_fn: _collate_fn_t, batch_size: int, seed: int, drop_last: bool = False) -> None:
        gen = torch.Generator()  # dedicated RNG for the shuffle sampler
        gen.manual_seed(seed)
        super().__init__(
            dataset=dataset,
            batch_size=batch_size,
            shuffle=True,
            collate_fn=collate_fn,
            generator=gen,
            drop_last=drop_last,
        )
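Because the loader always passes shuffle=True together with a generator seeded from the seed argument, the order in which samples are visited is reproducible across runs. A minimal usage sketch, assuming a toy TensorDataset and torch's built-in default_collate (both illustrative, not part of this module):

import torch
from torch.utils.data import TensorDataset, default_collate
from lib.dataloader.LlamaDataLoader import LlamaDataLoader  # import path per this module

# Hypothetical toy dataset: ten (feature, label) pairs.
features = torch.arange(10, dtype=torch.float32).unsqueeze(1)
labels = torch.arange(10)
dataset = TensorDataset(features, labels)

loader = LlamaDataLoader(
    dataset=dataset,
    collate_fn=default_collate,  # stack samples into batched tensors
    batch_size=4,
    seed=42,
)

for batch_features, batch_labels in loader:
    print(batch_features.shape, batch_labels.shape)  # e.g. torch.Size([4, 1]) torch.Size([4])

With drop_last left at its default of False, the final batch of this 10-sample dataset holds only 2 samples.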
Classes
class LlamaDataLoader (dataset: torch.utils.data.dataset.Dataset, collate_fn: Callable[[List[~T]], Any], batch_size: int, seed: int, drop_last: bool = False)
Data loader combines a dataset and a sampler, and provides an iterable over the given dataset.

The torch.utils.data.DataLoader supports both map-style and iterable-style datasets with single- or multi-process loading, customizing loading order and optional automatic batching (collation) and memory pinning. See the torch.utils.data documentation page for more details.

Args
- dataset (Dataset): dataset from which to load the data.
- batch_size (int, optional): how many samples per batch to load (default: 1).
- shuffle (bool, optional): set to True to have the data reshuffled at every epoch (default: False).
- sampler (Sampler or Iterable, optional): defines the strategy to draw samples from the dataset. Can be any Iterable with __len__ implemented. If specified, shuffle must not be specified.
- batch_sampler (Sampler or Iterable, optional): like sampler, but returns a batch of indices at a time. Mutually exclusive with batch_size, shuffle, sampler, and drop_last.
- num_workers (int, optional): how many subprocesses to use for data loading. 0 means that the data will be loaded in the main process (default: 0).
- collate_fn (Callable, optional): merges a list of samples to form a mini-batch of Tensor(s). Used when using batched loading from a map-style dataset.
- pin_memory (bool, optional): if True, the data loader will copy Tensors into device/CUDA pinned memory before returning them. If your data elements are a custom type, or your collate_fn returns a batch that is a custom type, see the example in the torch.utils.data documentation.
- drop_last (bool, optional): set to True to drop the last incomplete batch if the dataset size is not divisible by the batch size. If False and the size of the dataset is not divisible by the batch size, the last batch will be smaller (default: False).
- timeout (numeric, optional): if positive, the timeout value for collecting a batch from workers. Should always be non-negative (default: 0).
- worker_init_fn (Callable, optional): if not None, this will be called on each worker subprocess with the worker id (an int in [0, num_workers - 1]) as input, after seeding and before data loading (default: None).
- multiprocessing_context (str or multiprocessing.context.BaseContext, optional): if None, the default multiprocessing context of your operating system will be used (default: None).
- generator (torch.Generator, optional): if not None, this RNG will be used by RandomSampler to generate random indexes and by multiprocessing to generate base_seed for workers (default: None).
- prefetch_factor (int, optional, keyword-only): number of batches loaded in advance by each worker. 2 means there will be a total of 2 * num_workers batches prefetched across all workers. The default depends on num_workers: if num_workers == 0 the default is None, otherwise the default is 2.
- persistent_workers (bool, optional): if True, the data loader will not shut down the worker processes after a dataset has been consumed once. This keeps the worker Dataset instances alive (default: False).
- pin_memory_device (str, optional): the device to pin memory to if pin_memory is True.

Warning: if the spawn start method is used, worker_init_fn cannot be an unpicklable object, e.g., a lambda function. See the multiprocessing best practices notes for more details on multiprocessing in PyTorch.

Warning: the len(dataloader) heuristic is based on the length of the sampler used. When dataset is an IterableDataset, it instead returns an estimate based on len(dataset) / batch_size, with proper rounding depending on drop_last, regardless of the multi-process loading configuration. This represents the best guess PyTorch can make, because PyTorch trusts the user's dataset code to handle multi-process loading correctly and avoid duplicate data. However, if sharding results in multiple workers having incomplete last batches, this estimate can still be inaccurate, because (1) an otherwise complete batch can be broken into multiple ones and (2) more than one batch worth of samples can be dropped when drop_last is set. Unfortunately, PyTorch cannot detect such cases in general. See Dataset Types for more details on these two kinds of datasets and how IterableDataset interacts with multi-process data loading.

Warning: see the reproducibility, dataloader-workers-random-seed, and data-loading-randomness notes for questions related to random seeds.

Multiprocessing contexts: https://docs.python.org/3/library/multiprocessing.html#contexts-and-start-methods
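collate_fn is a required positional argument of LlamaDataLoader, so callers typically supply a function that merges variable-length samples into a padded batch. A sketch of one such function; pad_collate, PAD_ID, and the random token sequences below are illustrative assumptions, not part of this module:

import torch
from torch.nn.utils.rnn import pad_sequence
from lib.dataloader.LlamaDataLoader import LlamaDataLoader  # import path per this module

PAD_ID = 0  # hypothetical padding token id; a real tokenizer would define this

def pad_collate(batch):
    # Pad a list of variable-length 1-D token-id tensors into a (batch, max_len) tensor.
    return pad_sequence(batch, batch_first=True, padding_value=PAD_ID)

# A plain list of tensors behaves as a simple map-style dataset here.
sequences = [torch.randint(1, 100, (n,)) for n in (5, 9, 7, 3)]

loader = LlamaDataLoader(
    dataset=sequences,
    collate_fn=pad_collate,
    batch_size=2,
    seed=0,
)

for batch in loader:
    print(batch.shape)  # (2, longest sequence in this batch)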
Ancestors
- torch.utils.data.dataloader.DataLoader
- typing.Generic
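Since the only source of randomness is the generator seeded in __init__, two loaders built with the same seed should visit the dataset in the same order. A small sanity-check sketch (the batch_order helper and toy dataset are hypothetical):

import torch
from torch.utils.data import TensorDataset, default_collate
from lib.dataloader.LlamaDataLoader import LlamaDataLoader  # import path per this module

dataset = TensorDataset(torch.arange(16))

def batch_order(seed):
    # Collect the order in which sample values appear, batch by batch.
    loader = LlamaDataLoader(dataset, default_collate, batch_size=4, seed=seed)
    return [batch[0].tolist() for batch in loader]

assert batch_order(123) == batch_order(123)  # same seed, same shuffle order
assert batch_order(123) != batch_order(456)  # different seeds virtually always give a different order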