
Commit b70f6f3

Merge pull request #8 from MedVIC-Lab/testing
Tushar additions + basic publication sorting
2 parents 2785930 + 9ef1f2e commit b70f6f3

Some content is hidden: large commits collapse some diffs by default, so not every changed file appears below.

62 files changed: +178 −366 lines

.gitignore (+9 −4)

```diff
@@ -59,9 +59,6 @@ typings/
 .env
 .env.test
 
-# Nuxt build directory
-.nuxt
-
 # Nuxt generate directory
 dist
 
@@ -78,4 +75,12 @@ static
 *.ntvs*
 *.njsproj
 *.sln
-*.sw?
+*.sw?
+
+# publication, projects, and members JSON
+/public/assets/publications.json
+/public/assets/projects.json
+/public/assets/members.json
+
+# OS generated files
+.DS_Store
```

.vitepress/config.mts (+2 −1)

```diff
@@ -13,7 +13,8 @@ export default defineConfig({
       { text: 'Home', link: '/' },
       { text: 'About', link: '/pages/about' },
       { text: 'People', link: '/pages/people' },
-      { text: 'Projects', link: '/pages/projects' },
+      // Uncomment when projects are added
+      // { text: 'Projects', link: '/pages/projects' },
       { text: 'Publications', link: '/pages/publications' },
     ],
   },
```

.vitepress/theme/custom.css (−60, file deleted)

```diff
@@ -1,60 +0,0 @@
-a {
-  color: #007bff!important;
-}
-
-a:hover {
-  text-decoration: underline!important;
-}
-
-a:visited {
-  color: #18508c!important;
-}
-
-h1 {
-  font-size: 2.5em;
-  margin-bottom: 20px;
-  color: #333;
-}
-
-h2 {
-  font-size: 1.5em;
-  margin-bottom: 10px;
-  color: #333;
-}
-
-h3 {
-  font-size: 1.25em;
-  margin-bottom: 10px;
-  color: #333;
-}
-
-h4 {
-  font-size: 1em;
-  margin-bottom: 10px;
-  color: #333;
-}
-
-h5 {
-  font-size: 0.875em;
-  margin-bottom: 10px;
-  color: #333;
-}
-
-h6 {
-  font-size: 0.75em;
-  margin-bottom: 10px;
-  color: #333;
-}
-
-p {
-  margin-bottom: 20px;
-  color: #555;
-}
-
-code {
-  background-color: #f5f5f5; /* Light gray background */
-  padding: 2px 4px; /* Padding around the code */
-  border-radius: 4px; /* Rounded corners */
-  font-family: 'Courier New', Courier, monospace; /* Monospace font */
-  color: #333; /* Text color */
-}
```

.vitepress/theme/index.ts (+1)

```diff
@@ -5,6 +5,7 @@ import './custom.css'
 const layouts = import.meta.glob('../../layouts/*.vue')
 
 export default {
+  Layout: DefaultTheme.Layout,
   extends: DefaultTheme,
   enhanceApp({ app }) {
     // Register each layout component
```
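
The `enhanceApp` body is only partially visible in this hunk. As a minimal sketch, the full file after this change plausibly looks like the following; only the `Layout: DefaultTheme.Layout` line is actually from this commit, and the registration loop is an assumption extrapolated from the "Register each layout component" context comment:

```ts
// Sketch of .vitepress/theme/index.ts after this change; the loop body is assumed.
import DefaultTheme from 'vitepress/theme'
import { defineAsyncComponent } from 'vue'
import './custom.css' // still present per the hunk context, though custom.css is deleted in this commit

// Lazy loaders for every custom layout component under layouts/
const layouts = import.meta.glob('../../layouts/*.vue')

export default {
  Layout: DefaultTheme.Layout,
  extends: DefaultTheme,
  enhanceApp({ app }) {
    // Register each layout component globally under its file name,
    // so layouts/project.vue becomes usable as the "project" layout.
    for (const [path, loader] of Object.entries(layouts)) {
      const name = path.split('/').pop()!.replace('.vue', '')
      app.component(name, defineAsyncComponent(loader as () => Promise<any>))
    }
  },
}
```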

index.md (−1)

```diff
@@ -1,5 +1,4 @@
 ---
-# https://vitepress.dev/reference/default-theme-home-page
 layout: home
 
 hero:
```

layouts/project.vue (+4 −3)

```diff
@@ -1,5 +1,5 @@
 <template>
-  <div class="project-layout">
+  <div class="project-layout vp-doc">
     <div class="project-content">
       <h1>{{ frontmatter.name }}</h1>
       <Content />
@@ -28,8 +28,9 @@
 </template>
 
 <script setup>
-import { Content, useData } from 'vitepress'
-const { frontmatter } = useData()
+import { Content, useData } from 'vitepress';
+
+const { frontmatter } = useData();
 </script>
 
 <style>
```

pages/people/katariaTushar.md (+22, new file)

```diff
@@ -0,0 +1,22 @@
+---
+layout: person
+name: "Tushar Kataria"
+role: "PhD Student"
+title: "PhD Candidate" # e.g., "PhD Student", "MS Student", "Staff", "Researcher", "Alumni"
+org: "University of Utah, SCI Institute"
+avatar: "katariaTushar.png" # Replace with the URL to your avatar image
+links:
+  - icon: "github"
+    link: "https://github.com/tushaarkataria" # Replace with your GitHub profile link
+  - icon: "linkedin"
+    link: "https://www.linkedin.com/in/tushar-kataria-05a69456/"
+  - icon: "website"
+    link: "https://tushaarkataria.github.io" # Replace with your personal website link
+---
+
+# About Tushar Kataria
+
+I am Tushar Kataria, a PhD candidate in the School of Computing at the University of Utah, working at the Scientific Computing and Imaging (SCI) Institute and advised by Prof. Shireen Elhabian.
+
+My broader research interests are image processing, medical image analysis, deep learning, and computer vision. Specifically, I am interested in semantic segmentation, instance segmentation, and domain adaptation for medical imaging datasets. I am also interested in statistical shape modeling and multimodal machine learning. My current work involves conditional generative models with applications in virtual staining and 3D volume registration.
+
```
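
The `layout: person` frontmatter above is consumed by a person layout component that is not part of this commit. Purely as a hypothetical sketch modeled on the layouts/project.vue diff shown earlier (component name, markup, and the `/assets/` avatar path are all assumptions):

```vue
<!-- Hypothetical layouts/person.vue, modeled on layouts/project.vue; not from this commit. -->
<template>
  <div class="person-layout vp-doc">
    <img :src="`/assets/${frontmatter.avatar}`" :alt="frontmatter.name" />
    <h1>{{ frontmatter.name }}</h1>
    <p>{{ frontmatter.title }}, {{ frontmatter.org }}</p>
    <Content />
  </div>
</template>

<script setup>
import { Content, useData } from 'vitepress';

const { frontmatter } = useData();
</script>
```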

pages/projects/example.md (+1 −1)

````diff
@@ -43,7 +43,7 @@ Testing in an h6
 
 ## Testing a code block
 
-```
+```js
 const jake = "is cool";
 console.log(jake.replace("cool", "uncool"));
 ```
````

pages/publications.md (+16 −1)

```diff
@@ -9,7 +9,22 @@ const publications = ref([])
 
 onMounted(async () => {
   const response = await fetch('/assets/publications.json')
-  publications.value = await response.json()
+  const pubs = await response.json()
+
+  // TODO: update with better sorting when functionality added to UI
+  // sort by year, then by first author (alphabetical)
+  publications.value = pubs.sort((a, b) => {
+    if (a.year !== b.year) {
+      return b.year - a.year; // Sort by year in descending order
+    }
+    // Sort by first author alphabetically (last name)
+    const aFirstAuthor = a.authors.split(',')[0]
+    const bFirstAuthor = b.authors.split(',')[0]
+    const aLastName = aFirstAuthor.split(' ').pop()
+    const bLastName = bFirstAuthor.split(' ').pop()
+
+    return aLastName.localeCompare(bLastName);
+  })
 })
 </script>
```
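
For reference, the comparator can be exercised standalone. A minimal sketch with hypothetical sample entries, assuming the frontmatter formats used by the publication pages below (string years, comma-separated "First Last" author lists):

```js
// Standalone sketch of the year-then-first-author ordering used above.
// Sample entries are hypothetical; the fields mirror the publication frontmatter.
const sample = [
  { year: "2023", authors: "Tushar Kataria, Shireen Y. Elhabian" },
  { year: "2024", authors: "Janmesh Ukey, Tushar Kataria, Shireen Y Elhabian" },
  { year: "2023", authors: "Shikha Dubey, Tushar Kataria" },
]

sample.sort((a, b) => {
  if (a.year !== b.year) return b.year - a.year // newest year first
  // Tie-break: first author's last name, alphabetically
  const aLastName = a.authors.split(',')[0].split(' ').pop()
  const bLastName = b.authors.split(',')[0].split(' ').pop()
  return aLastName.localeCompare(bLastName)
})

console.log(sample.map(p => `${p.year} ${p.authors.split(',')[0]}`))
// [ '2024 Janmesh Ukey', '2023 Shikha Dubey', '2023 Tushar Kataria' ]
```

Note that `b.year - a.year` relies on numeric coercion of the string years, and the last-name heuristic breaks on suffixed or multi-word surnames; the TODO in the diff acknowledges this is placeholder sorting.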

pages/publications/uncertain_deepssm.md renamed to pages/publications/2020_uncertain_deepssm.md (+1 −1)

```diff
@@ -5,7 +5,7 @@ conference: "Shape in Medical Imaging (ShapeMI) at MICCAI"
 year: "2020"
 link: "https://arxiv.org/abs/2007.06516"
 image:
-  src: "uncertain_deepssm.png"
+  src: "2020_uncertain_deepssm.png"
   alt: Results Highlight
 ---
 
```

pages/publications/benchmarking.md renamed to pages/publications/2021_benchmarking.md (+1 −1)

```diff
@@ -5,7 +5,7 @@ conference: "Medical Image Analysis"
 year: "2021"
 link: "https://www.sciencedirect.com/science/article/pii/S1361841521003169"
 image:
-  src: "benchmarking.jpg"
+  src: "2021_benchmarking.jpg"
   alt: Benchmarking FrameWork
 ---
 
```

pages/publications/rvtr.md renamed to pages/publications/2022_rvtr.md (+1 −1)

```diff
@@ -6,7 +6,7 @@ conference: "Frontiers in Physiology"
 year: "2022"
 link: "https://www.frontiersin.org/journals/physiology/articles/10.3389/fphys.2022.908552/full"
 image:
-  src: "rvtr.jpg"
+  src: "2022_rvtr.jpg"
   alt: Results Highlight
 ---
 
```

pages/publications/sharedboundary.md renamed to pages/publications/2022_sharedboundary.md (+1 −1)

```diff
@@ -5,7 +5,7 @@ conference: "Statistical Atlases and Computational Models of the Heart (STACOM)
 year: "2022"
 link: "https://pmc.ncbi.nlm.nih.gov/articles/PMC10103081/"
 image:
-  src: "stacom_shared.jpg"
+  src: "2022_stacom_shared.jpg"
   alt: Results Highlight
 ---
 
```

pages/publications/spatiotemporal_ssm.md renamed to pages/publications/2022_spatiotemporal_ssm.md (+1 −1)

```diff
@@ -5,7 +5,7 @@ conference: "Statistical Atlases and Computational Models of the Heart (STACOM)
 year: "2022"
 link: "https://arxiv.org/abs/2209.02736"
 image:
-  src: "spatiotemporal_ssm.png"
+  src: "2022_spatiotemporal_ssm.png"
   alt: Results Highlight
 ---
 
```

pages/publications/vib_deepssm.md renamed to pages/publications/2022_vib_deepssm.md (+1 −1)

```diff
@@ -5,7 +5,7 @@ conference: "MICCAI"
 year: "2022"
 link: "https://arxiv.org/abs/2205.06862"
 image:
-  src: "vib_deepssm.png"
+  src: "2022_vib_deepssm.png"
   alt: Results Highlight
 ---
 
```

pages/publications/adassm.md renamed to pages/publications/2023_ADASSM.md (+1 −1)

```diff
@@ -5,7 +5,7 @@ conference: "Shape in Medical Imaging (ShapeMI) at MICCAI"
 year: "2023"
 link: "https://arxiv.org/abs/2307.03273"
 image:
-  src: "adassm.png"
+  src: "2023_ADASSM.png"
   alt: Results Highlight
 ---
 
```

New publication file (+14; path hidden in this view)

```diff
@@ -0,0 +1,14 @@
+---
+title: "Automating ground truth annotations for gland segmentation through immunohistochemistry"
+authors: "Tushar Kataria, Saradha Rajamani, Abdul Bari Ayubi, Mary Bronner, Jolanta Jedrzkiewicz, Beatrice S Knudsen, Shireen Y Elhabian"
+conference: "Modern Pathology, Volume 36, Issue 12"
+year: "2023"
+link: "https://www.sciencedirect.com/science/article/pii/S0893395223002363"
+image:
+  src: "2023_Automated_Annotation_pathology.png"
+  alt: Automated Annotation
+---
+
+# An end-to-end processing pipeline for automated annotation using immunohistochemistry
+
+Microscopic evaluation of glands in the colon is of utmost importance in the diagnosis of inflammatory bowel disease and cancer. When properly trained, deep learning pipelines can provide a systematic, reproducible, and quantitative assessment of disease-related changes in glandular tissue architecture. The training and testing of deep learning models require large amounts of manual annotations, which are difficult, time-consuming, and expensive to obtain. Here, we propose a method for automated generation of ground truth in digital hematoxylin and eosin (H&E)–stained slides using immunohistochemistry (IHC) labels. The image processing pipeline generates annotations of glands in H&E histopathology images from colon biopsy specimens by transfer of gland masks from KRT8/18, CDX2, or EPCAM IHC. The IHC gland outlines are transferred to coregistered H&E images for training of deep learning models. We compared the performance of the deep learning models to that of manual annotations using an internal held-out set of biopsy specimens as well as 2 public data sets. Our results show that EPCAM IHC provides gland outlines that closely match manual gland annotations (Dice = 0.89) and are resilient to damage by inflammation. In addition, we propose a simple data sampling technique that allows models trained on data from several sources to be adapted to a new data source using just a few newly annotated samples. The best performing models achieved average Dice scores of 0.902 and 0.89 on Gland Segmentation and Colorectal Adenocarcinoma Gland colon cancer public data sets, respectively, when trained with only 10% of annotated cases from either public cohort. Altogether, the performances of our models indicate that automated annotations using cell type–specific IHC markers can safely replace manual annotations. Automated IHC labels from single-institution cohorts can be combined with small numbers of hand-annotated cases from multi-institutional cohorts to train models that generalize well to diverse data sources.
```

pages/publications/2023_REU_NSF.md (+11, new file)

```diff
@@ -0,0 +1,11 @@
+---
+title: "An NSF REU Site Based on Trust and Reproducibility of Intelligent Computation: Experience Report"
+authors: "Mary Hall, Ganesh Gopalakrishnan, Eric Eide, Johanna Cohoon, Jeff Phillips, Mu Zhang, Shireen Elhabian, Aditya Bhaskara, Harvey Dam, Artem Yadrov, Tushar Kataria, Amir Mohammad Tavakkoli, Sameeran Joshi, Mokshagna Sai Teja Karanam"
+conference: "Proceedings of the SC'23 Workshops of The International Conference on High Performance Computing, Network, Storage, and Analysis"
+year: "2023"
+link: "https://dl.acm.org/doi/abs/10.1145/3624062.3624100"
+---
+
+# NSF REU Experience Report for 2023
+
+This paper presents an overview of an NSF Research Experience for Undergraduate (REU) Site on Trust and Reproducibility of Intelligent Computation, delivered by faculty and graduate students in the Kahlert School of Computing at University of Utah. The chosen themes bring together several concerns for the future in producing computational results that can be trusted: secure, reproducible, based on sound algorithmic foundations, and developed in the context of ethical considerations. The research areas represented by student projects include machine learning, high-performance computing, algorithms and applications, computer security, data science, and human-centered computing. In the first four weeks of the program, the entire student cohort spent their mornings in lessons from experts in these crosscutting topics, and used one-of-a-kind research platforms operated by the University of Utah, namely NSF-funded CloudLab and POWDER facilities; reading assignments, quizzes, and hands-on exercises reinforced the lessons. In the subsequent five weeks, lectures were less frequent, as students branched into small groups to develop their research projects. The final week focused on a poster presentation and final report. Through describing our experiences, this program can serve as a model for preparing a future workforce to integrate machine learning into trustworthy and reproducible applications.
```
New publication file (+14; path hidden in this view)

```diff
@@ -0,0 +1,14 @@
+---
+title: "Structural Cycle-GAN for virtual immunohistochemistry staining of gland markers in the colon"
+authors: "Shikha Dubey, Tushar Kataria, Beatrice S Knudsen, Shireen Y. Elhabian"
+conference: "International Workshop on Machine Learning in Medical Imaging"
+year: "2023"
+link: "https://link.springer.com/chapter/10.1007/978-3-031-45676-3_45"
+image:
+  src: "2023_SCGAN_virtual_staining.png"
+  alt: Structural CycleGAN
+---
+
+# A new virtual staining architecture that regularizes learning via structural consistency
+
+With the advent of digital scanners and deep learning, diagnostic operations may move from a microscope to a desktop. Hematoxylin and Eosin (H&E) staining is one of the most frequently used stains for disease analysis, diagnosis, and grading, but pathologists do need different immunohistochemical (IHC) stains to analyze specific structures or cells. Obtaining all of these stains (H&E and different IHCs) on a single specimen is a tedious and time-consuming task. Consequently, virtual staining has emerged as an essential research direction. Here, we propose a novel generative model, Structural Cycle-GAN (SC-GAN), for synthesizing IHC stains from H&E images, and vice versa. Our method expressly incorporates structural information in the form of edges (in addition to color data) and employs attention modules exclusively in the decoder of the proposed generator model. This integration enhances feature localization and preserves contextual information during the generation process. In addition, a structural loss is incorporated to ensure accurate structure alignment between the generated and input markers. To demonstrate the efficacy of the proposed model, experiments are conducted with two IHC markers emphasizing distinct structures of glands in the colon: the nucleus of epithelial cells (CDX2) and the cytoplasm (CK818). Quantitative metrics such as FID and SSIM are frequently used for the analysis of generative models, but they do not correlate explicitly with higher-quality virtual staining results. Therefore, we propose two new quantitative metrics that correlate directly with the virtual staining specificity of IHC markers.
```
New publication file (+14; path hidden in this view)

```diff
@@ -0,0 +1,14 @@
+---
+title: "To pretrain or not to pretrain? A case study of domain-specific pretraining for semantic segmentation in histopathology"
+authors: "Tushar Kataria, Beatrice S Knudsen, Shireen Y. Elhabian"
+conference: "Workshop on Medical Image Learning with Limited and Noisy Data"
+year: "2023"
+link: "https://link.springer.com/chapter/10.1007/978-3-031-44917-8_24"
+image:
+  src: "2023_benchmarking_pathology_pretraining.png"
+  alt: Benchmarking Semantic Segmentation
+---
+
+# Benchmarking whether domain-specific pretraining helps gland and cell segmentation in histopathology
+
+Annotating medical imaging datasets is costly, so fine-tuning (or transfer learning) is the most effective method for digital pathology vision applications such as disease classification and semantic segmentation. However, due to texture bias in models trained on real-world images, transfer learning for histopathology applications might result in underperforming models, which necessitates the need for using unlabeled histopathology data and self-supervised methods to discover domain-specific characteristics. Here, we tested the premise that histopathology-specific pretrained models provide better initializations for pathology vision tasks, i.e., gland and cell segmentation. In this study, we compare the performance of gland and cell segmentation tasks with histopathology domain-specific and non-domain-specific (real-world images) pretrained weights. Moreover, we investigate the dataset size at which domain-specific pretraining produces significant gains in performance. In addition, we investigated whether domain-specific initialization improves the effectiveness of out-of-distribution testing on distinct datasets but the same task. The results indicate that performance gain using domain-specific pretrained weights depends on both the task and the size of the training dataset. In instances with limited dataset sizes, a significant improvement in gland segmentation performance was also observed, whereas models trained on cell segmentation datasets exhibit no improvement.
```

pages/publications/benchmarking_segmentation.md renamed to pages/publications/2023_benchmarking_segmentation.md (+1 −1)

```diff
@@ -5,7 +5,7 @@ conference: "Unsure Workshop at MICCAI"
 year: "2023"
 link: "https://arxiv.org/abs/2308.07506"
 image:
-  src: "benchmarking_segmentation.png"
+  src: "2023_benchmarking_segmentation.png"
   alt: Results Highlight
 ---
 
```

pages/publications/bvib_deepssm.md renamed to pages/publications/2023_bvib_deepssm.md (+1 −1)

```diff
@@ -5,7 +5,7 @@ conference: "MICCAI"
 year: "2023"
 link: "https://arxiv.org/pdf/2305.05797.pdf"
 image:
-  src: "bvib_deepssm.png"
+  src: "2023_bvib_deepssm.png"
   alt: Results Highlight
 ---
 
```

pages/publications/can_pointclouds.md renamed to pages/publications/2023_can_pointclouds.md (+1 −1)

```diff
@@ -5,7 +5,7 @@ conference: "MICCAI"
 year: "2023"
 link: "https://arxiv.org/abs/2305.05610"
 image:
-  src: "can_pointclouds.png"
+  src: "2023_can_pointclouds.png"
   alt: Results Highlight
 ---
 
```

pages/publications/frontiers_sharedboundary.md renamed to pages/publications/2023_frontiers_sharedboundary.md (+1 −1)

```diff
@@ -5,7 +5,7 @@ conference: "Frontiers in Bioengineering and Biotechnology"
 year: "2023"
 link: "https://www.frontiersin.org/journals/bioengineering-and-biotechnology/articles/10.3389/fbioe.2022.1078800/full"
 image:
-  src: "frontiers_shared.jpg"
+  src: "2023_frontiers_shared.jpg"
   alt: Results Highlight
 ---
 
```

pages/publications/frontiers_spatiotemporal_ssm.md renamed to pages/publications/2023_frontiers_spatiotemporal_ssm.md (+1 −1)

```diff
@@ -5,7 +5,7 @@ conference: "Frontiers in Bioengineering and Biotechnology"
 year: "2023"
 link: "https://www.frontiersin.org/journals/bioengineering-and-biotechnology/articles/10.3389/fbioe.2023.1086234/full"
 image:
-  src: "frontiers_spatiotemporal_ssm.jpg"
+  src: "2023_frontiers_spatiotemporal_ssm.jpg"
   alt: Results Highlight
 ---
 
```

pages/publications/mesh2ssm.md renamed to pages/publications/2023_mesh2ssm.md (+1 −1)

```diff
@@ -5,7 +5,7 @@ conference: "MICCAI 2023"
 year: "2023"
 link: "https://scholar.google.com/citations?view_op=view_citation&hl=en&user=MN0NWL0AAAAJ&citation_for_view=MN0NWL0AAAAJ:W7OEmFMy1HYC"
 image:
-  src: "mesh2ssm.png"
+  src: "2023_mesh2ssm.png"
   alt: Mesh2SSM Model
 ---
 
```

pages/publications/2024_MASSM.md (+14, new file)

```diff
@@ -0,0 +1,14 @@
+---
+title: "MASSM: An End-to-End Deep Learning Framework for Multi Anatomy Statistical Shape Modeling Directly From Images"
+authors: "Janmesh Ukey, Tushar Kataria, Shireen Y Elhabian"
+conference: "International Workshop on Shape in Medical Imaging"
+year: "2024"
+link: "https://link.springer.com/chapter/10.1007/978-3-031-75291-9_12"
+image:
+  src: "2024_MASSM.png"
+  alt: Automated Annotation
+---
+
+# A method for multi-anatomy SSM directly from images
+
+Statistical shape modeling (SSM) effectively analyzes anatomical variations within populations but is limited by the need for manual localization and segmentation, which relies on scarce medical expertise. Recent advances in deep learning have provided a promising approach that automatically generates statistical representations (as point distribution models or PDMs) from unsegmented images. Once trained, these deep learning-based models eliminate the need for manual segmentation for new subjects. Most deep learning methods still require manual pre-alignment of image volumes and bounding box specifications around the target anatomy, leading to a partially manual inference process. Recent approaches facilitate anatomy localization but only estimate population-level statistical representations and cannot directly delineate anatomy in images. Additionally, they are limited to modeling a single anatomy. We introduce MASSM, a novel end-to-end deep learning framework that simultaneously localizes multiple anatomies, estimates population-level statistical representations, and delineates shape representations directly in image space. Our results show that MASSM, which delineates anatomy in image space and handles multiple anatomies through a multitask network, provides superior shape information compared to segmentation networks for medical imaging tasks. Estimating SSM is a stronger task than segmentation because it encodes a more robust statistical prior for the objects to be detected and delineated. MASSM allows for more accurate and comprehensive shape representations, surpassing the capabilities of traditional pixel-wise segmentation.
```
