Currently, it is not obvious how to apply a fitted PaCMAP model to a test dataset. Although a transform() method is available, it is not obvious how it is used or what syntax it expects, so some documentation on that would be appreciated.
The documentation website is currently under construction, but the docstrings are already available in the source code. I will paste the documentation for the transform() method here as a reference:
Projects a high-dimensional dataset into the existing embedding space and returns the embedding.
Parameters
----------
X: numpy.ndarray
The new high-dimensional dataset that is being projected.
An embedding will get created based on parameters of the PaCMAP instance.
basis: numpy.ndarray
The original dataset that was used during the `fit` or `fit_transform` process.
If `save_tree == False`, then the basis is required to reconstruct the ANNOY tree instance.
If `save_tree == True`, then it's unnecessary to provide the original dataset again.
init: str, optional
One of ['pca', 'random']. Initialization of the embedding, default='pca'.
If 'pca', then the low-dimensional embedding is initialized to the PCA-mapped dataset.
The PCA instance will be the same one that was applied to the original dataset during the `fit` or `fit_transform` process.
If 'random', then the low-dimensional embedding is initialized with a Gaussian distribution.
save_pairs: bool, optional
Whether to save the pairs that are sampled from the dataset. Useful for reproducing results.
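To make this concrete, here is a minimal usage sketch based on the docstring above. The data is random and purely illustrative, and constructor arguments such as `n_components` may be named differently depending on your pacmap version, so please check the source if a call fails:

```python
import numpy as np
import pacmap

# Illustrative random data; any (n_samples, n_features) float arrays work.
X_train = np.random.RandomState(0).rand(1000, 20).astype(np.float32)
X_test = np.random.RandomState(1).rand(200, 20).astype(np.float32)

# Option 1: keep the ANNOY tree so transform() does not need the original data.
reducer = pacmap.PaCMAP(n_components=2, save_tree=True)
train_embedding = reducer.fit_transform(X_train)
test_embedding = reducer.transform(X_test)

# Option 2: with save_tree=False (the default), pass the training data as `basis`
# so the ANNOY tree can be rebuilt before the new points are projected.
reducer = pacmap.PaCMAP(n_components=2)
train_embedding = reducer.fit_transform(X_train)
test_embedding = reducer.transform(X_test, basis=X_train)
```

In both cases `test_embedding` has shape `(len(X_test), n_components)` and lives in the same embedding space as `train_embedding`, so the two can be plotted or compared together.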