Commit b7d7d32: Update interpretability.ipynb
Parent: e68df0f

File tree: 1 file changed (+2, -2 lines)


use_cases/interpretability.ipynb

+2-2
@@ -909,7 +909,7 @@
 ],
 "source": [
 "df_class = attributions_class.head(n_plot)\n",
-"ci = 1.96 * df[\"attribution_std\"] / np.sqrt(df[\"cells\"])\n",
+"ci = 1.96 * df_class[\"attribution_std\"] / np.sqrt(df_class[\"cells\"])\n",
 "fig, ax = plt.subplots(nrows=1, ncols=1, figsize=(5, 2), dpi=200)\n",
 "sns.barplot(ax=ax, data=df_class, x=\"gene\", y=\"attribution_mean\", hue=\"gene\", dodge=False)\n",
 "ax.set_yticks([])\n",
@@ -943,7 +943,7 @@
 "cell_type": "markdown",
 "metadata": {},
 "source": [
-"SHAP (SHapley Additive exPlanations) values are a popular interpretability technique based on cooperative game theory. The core idea is to fairly allocate the \"credit\" for a model's prediction to each feature, by considering all possible combinations of features and their impact on the prediction. SHAP values are additive, meaning the sum of the SHAP values for all features equals the difference between the model’s output and the average prediction. This method works for any model type, providing a consistent way to explain individual predictions, making it highly versatile and widely applicable. Deep SHAP is an extension of the SHAP method designed specifically for deep learning models, such as the ones in SCVI-Tools. For more information see [this](\"https://www.nature.com/articles/s41592-024-02511-3\")\n",
+"SHAP (SHapley Additive exPlanations) values are a popular interpretability technique based on cooperative game theory. The core idea is to fairly allocate the \"credit\" for a model's prediction to each feature, by considering all possible combinations of features and their impact on the prediction. SHAP values are additive, meaning the sum of the SHAP values for all features equals the difference between the model’s output and the average prediction. This method works for any model type, providing a consistent way to explain individual predictions, making it highly versatile and widely applicable. Deep SHAP is an extension of the SHAP method designed specifically for deep learning models, such as the ones in SCVI-Tools. For more information see [this](https://www.nature.com/articles/s41592-024-02511-3)\n",
 "\n",
 "Calculation of SHAP for single-cell data usually takes a lot of time. In SCVI-Tools we run an approximation of SHAP to reduce runtime, using just 100 cells at each iteration."
 ]
