Interpretability of Non-linear DR Methods #1
Hello,
It is applicable to non-linear DR techniques, as long as they can dimensionally reduce new, arbitrary instances without needing to be retrained (i.e., refitted on the whole dataset plus the new instance).
t-SNE, for example, does not offer this functionality, since it can only dimensionally reduce the data it has been trained on.
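A quick way to check this limitation, sketched here with scikit-learn's `TSNE` (an assumption of this example; the thread does not name a specific library):

```python
# Sketch: scikit-learn's TSNE exposes fit_transform but no transform(),
# so it cannot project new instances without refitting on the full data.
from sklearn.manifold import TSNE

print(hasattr(TSNE, "fit_transform"))  # True: can embed the fitted data
print(hasattr(TSNE, "transform"))      # False: no out-of-sample projection
```

The same check distinguishes any DR estimator that generalizes to unseen points from one that only embeds its training set.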
On Mon, May 9, 2022 at 8:24 PM, Malikeh Ehghaghi wrote:
> Hi,
> I wonder if your interpretability technique is applicable to non-linear DR approaches such as t-SNE or UMAP.
Awesome! Thanks a lot for your quick response. In my case, what I intend to do is apply this approach to UMAP embeddings. I want to find the main differentiating features between the clusters of points that appear after DR. I have a combined dataset with multiple labels, and I want to show which features or feature categories are responsible for the points grouping into distinct clusters after dimension reduction. Is it applicable in this case? Also, can you please name a couple of non-linear DR approaches that generalize to new instances?
The most common non-linear DR approach that comes to mind is Kernel PCA with a non-linear kernel ("poly", "rbf", ...).
Generally, the DR technique should provide a transform method in addition to fit or fit_transform, so you can dimensionally reduce new, unseen instances without retraining the model from scratch.
I think UMAP provides a transform method, so it should be applicable.
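The fit-once, transform-new-points pattern described above can be sketched with scikit-learn's `KernelPCA` (the data and parameters here are illustrative assumptions, not from the thread):

```python
# Minimal sketch: a DR model that exposes transform() can project
# new, unseen instances without being refitted on the whole dataset.
import numpy as np
from sklearn.decomposition import KernelPCA

rng = np.random.default_rng(0)
X_train = rng.normal(size=(100, 5))   # data the model is fitted on
X_new = rng.normal(size=(3, 5))       # arbitrary new instances

kpca = KernelPCA(n_components=2, kernel="rbf")
kpca.fit(X_train)                     # fit once on the training data

Z_new = kpca.transform(X_new)         # project new points; no refit needed
print(Z_new.shape)                    # (3, 2)
```

UMAP (from the umap-learn package) follows the same estimator API, so `UMAP().fit(X_train).transform(X_new)` should work analogously.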
Great! Thank you so much for your response.