Document Purpose: Synthesize critical evaluation and peer review simulation into actionable, priority-ranked recommendations Target Outcome: Elevate proposal from current top 10-15% to top 3-5% tier Evaluation Baseline: Current composite score 7.8/9.0 → Target 8.5-9.0/9.0
- Composite Score: 7.8/9.0 (Top 10-15% currently)
- Success Probability: 65-75%
- Status: Fundable with major revisions, but NOT in top 5% tier
- Projected Composite Score: 8.5-9.0/9.0 (Top 3-5%)
- Projected Success Probability: 85-95%+
- Status: Highly competitive, likely funding with potential for exceptional rating
BARRIER #1: Investigator Credibility Gap (-1.5 to -2.0 points)
- Zero named investigators, zero track record, zero preliminary data
- Reviewers cannot assess feasibility for $50M, 7-year, 50-site study without PI credentials
- Fix: Name PI + Co-Is, add CVs, preliminary data, letters of support
BARRIER #2: INCITE Model Status Ambiguity (-1.0 to -1.5 points)
- NeuroX-Fusion 130B described as existing but not cited or validated
- Unclear if model is real (low risk) or must be built from scratch (high risk, +$10M, +12 months)
- Fix: Clarify model status, provide citation/preliminary results OR add pre-training aim
BARRIER #3: 50-Site Coordination Feasibility (-0.5 to -1.0 points)
- Logistics of recruiting/retaining 50 sites across 5 continents severely underestimated
- No site management plan, no budget breakdown, no attrition modeling
- Fix: Add detailed coordination plan, site recruitment strategy, governance structure
Timeline to Fix: 4-8 weeks (with dedicated effort)
ROI: Fixing these 3 barriers → +2.5 to +3.5 points → 7.8 → 10.3-11.3 (capped at 9.0) = 8.5-9.0 realistic score
Definition: These fixes are non-negotiable. Without them, proposal cannot reach top 5% regardless of other strengths.
Estimated Impact: +2.0 to +2.5 points overall (7.8 → 9.8-10.3, realistically 8.5-9.0)
Timeline: 4-6 weeks with full team effort
Current State: No investigators named, no CVs, no publication lists
Required Fix:
Step 1: Name Principal Investigator (PI)
Required PI Profile:
- Senior investigator (Professor level, 15+ years experience)
- Multi-site expertise: Led 5+ multi-site studies (ideally 10+ sites, 1,000+ participants)
- Autism/DD expertise: 50+ autism/DD publications, h-index ≥50
- Funding track record: $20M+ in prior NIH/NSF grants (10+ R01-equivalent awards)
- ADOS-2/neuroimaging credentials: Gold-standard diagnostic training OR multi-site neuroimaging leadership
Example PI Profile (Hypothetical):
Dr. Jane Smith, PhD Professor of Psychiatry and Neuroscience, University of [X] Director, Center for Autism Research Excellence
Track Record:
- 25 years autism research experience
- Principal Investigator, ENIGMA-Autism Working Group (40 sites, 15,000+ participants)
- Co-Investigator, ABIDE consortium (multiple sites, 1,200+ participants)
- 180 peer-reviewed publications (h-index 85, 25,000+ citations)
- $35M in NIH/NSF funding over 15 years (12 R01s, 3 P50 center grants)
- ADOS-2 certified evaluator (2005), trainer (2010)
- 15+ PhD students graduated (12 now faculty at R1 universities)
Preliminary Work Relevant to This Proposal:
- Pilot multimodal fusion study (n=100, AUC 0.88) - manuscript in preparation
- Federated learning simulation on ABIDE (89% inter-site accuracy) - presented at OHBM 2024
- INCITE compute allocation secured (3M core-hours on Aurora, 2025-2026)
Step 2: Name 4-6 Co-Investigators
Required Expertise Coverage (at minimum):
Co-I #1: AI/ML Foundation Model Expert
- Requirements:
- Foundation model development experience (ideally 10B+ parameter models)
- Federated learning publications (5+ papers in NeurIPS, ICML, ICLR)
- Medical AI deployment experience (bonus: clinical ML systems)
- Example: Former Google Brain/Meta AI/OpenAI researcher OR academic with h-index ≥40 in ML
Co-I #2: Child Psychiatrist / Clinical Trials Expert
- Requirements:
- ADOS-2/ADI-R certified evaluator (essential)
- 10+ autism clinical trials as PI or site PI
- Pragmatic trial experience (pRCT design, real-world effectiveness)
- Example: Academic child psychiatrist with 100+ autism patients evaluated, 20+ RCTs
Co-I #3: Genetic Epidemiologist / Genomics Expert
- Requirements:
- WES analysis expertise (GATK pipeline, rare variant burden tests)
- GWAS experience (preferably autism GWAS, e.g., Grove et al. 2019 co-author)
- Causal inference in genomics (Mendelian randomization)
- Example: Investigator with 50+ genetics papers, h-index ≥30
Co-I #4: Neuroimaging Expert / Multi-Site Coordination
- Requirements:
- Multi-site neuroimaging leadership (preferably ENIGMA, ABIDE, or equivalent)
- FreeSurfer, fMRI preprocessing, quality control expertise
- Harmonization experience (ComBat, traveling phantom, etc.)
- Example: ENIGMA working group leader OR ABIDE contributor with 30+ neuroimaging papers
Co-I #5: Regulatory Scientist / FDA Consultant
- Requirements:
- FDA De Novo submission experience (5+ successful submissions)
- Former FDA reviewer (bonus) OR regulatory affairs VP at medical device company
- AI/ML SaMD expertise (21st Century Cures Act, FDA AI/ML guidance 2024)
- Example: Regulatory consultant with 10+ SaMD approvals, ISO 13485 QMS expertise
Co-I #6 (Optional): Biostatistician / Adaptive Trial Expert
- Requirements:
- Bayesian adaptive trial design (interim analyses, futility/efficacy stopping)
- Multi-site cluster randomization expertise
- Federated learning statistics (differential privacy, site heterogeneity modeling)
- Example: Biostatistician with 50+ clinical trial papers, DSMB membership experience
Step 3: Provide CVs and Biosketches
For Each Investigator:
- NIH Biosketch (5 pages): Education, positions, honors, contributions to science, research support
- Key Publications (15 most relevant, with YOUR role highlighted)
- Prior Funding (Active and completed grants, total $$, your role)
- Preliminary Work (section in biosketch OR separate 2-3 page document)
Step 4: Add Preliminary Data
Minimum Viable Preliminary Data:
Option A: Pilot Multimodal Fusion Study (n=50-100)
- Sample: 50 ASD, 50 TD controls (from single site)
- Modalities: sMRI + fMRI (minimum), ideally + EEG/genomics
- Analysis: Multimodal fusion (early/intermediate/late fusion comparison)
- Results: AUC 0.85-0.90 (proof-of-concept that multimodal beats unimodal)
- Status: "Manuscript in preparation" OR "Presented at INSAR 2024"
Option B: Federated Learning Simulation (ABIDE Dataset)
- Sample: ABIDE dataset (1,112 participants, 17 sites)
- Method: Simulate federated learning (site-by-site training, FedAvg aggregation)
- Analysis: Leave-one-site-out cross-validation (17-fold)
- Results: Inter-site accuracy 85-90% (demonstrates FL feasibility)
- Status: "Presented at OHBM 2024" OR "Submitted to NeuroImage"
Option C: LoRA Fine-Tuning on Existing Foundation Model (BrainLM)
- Sample: n=50 DD patients (ASD or ADHD)
- Method: Fine-tune BrainLM (or BrainOmni for EEG) with LoRA (rank=8)
- Analysis: Compare LoRA (r=8) vs. full fine-tuning vs. zero-shot
- Results: LoRA achieves 95% of full fine-tuning performance with 1% parameters
- Status: "Manuscript in preparation"
Gold Standard Preliminary Data (if time/resources allow):
- All 3 of the above (multimodal pilot, FL simulation, LoRA fine-tuning)
- Plus: INCITE allocation secured + preliminary NeuroX-Fusion 130B results (even on non-DD task)
- Plus: 5-10 site letters of intent (for multi-site recruitment feasibility)
Step 5: Add Letters of Support
Required Letters (Minimum):
Letter #1: INCITE Program Director / Aurora Compute Allocation
- From: DOE INCITE program director OR Aurora supercomputer allocation manager
- Content:
- Confirms compute allocation (e.g., "3M core-hours on Aurora for 2025-2026")
- Supports scientific merit of NeuroX-Fusion 130B pre-training
- States timeline (e.g., "Pre-training expected to complete Q2 2026")
- Critical: Without this letter, reviewers will assume INCITE model doesn't exist
Letter #2-6: Site Commitment Letters (5 sites minimum)
- From: Site PIs at 5 diverse locations (e.g., US academic, EU academic, Asia community clinic, US rural, Latin America)
- Content:
- Commits to participate (recruit 60 participants over 5 years)
- States IRB approval timeline (e.g., "IRB approval expected within 6 months of funding")
- Confirms site capabilities (MRI scanner, EEG lab, genomics partnership, ADOS-2 certified staff)
- Requests co-authorship, site-specific analyses, $100K site funding
- Impact: Demonstrates feasibility of 50-site recruitment (if 5 sites already committed, scaling to 50 is credible)
Letter #3-4: Advisory Board (2-3 senior leaders)
- From: Renowned autism researchers NOT on investigator team (e.g., SFARI director, IACC member, INSAR president)
- Content:
- Endorses scientific approach ("This multimodal federated learning approach is innovative and timely")
- Confirms unmet need ("Early diagnosis and precision medicine for autism are critical gaps")
- States willingness to serve on Scientific Advisory Board (quarterly meetings)
- Impact: Signals field endorsement (reviewers trust advisory board judgment)
Letter #5: FDA Regulatory Consultant
- From: Regulatory expert (former FDA reviewer OR consultant with 5+ SaMD approvals)
- Content:
- Confirms De Novo pathway feasibility ("Based on Canvas Dx precedent, De Novo is appropriate")
- Estimates timeline to FDA clearance ("7-10 years realistic with robust pRCT validation")
- States willingness to consult on regulatory strategy (pre-submission meetings, IDE/De Novo submissions)
- Impact: De-risks regulatory pathway (reviewers see expert guidance)
Letter #6 (Optional): Patient Advocacy Organization
- From: Autism Self-Advocacy Network (ASAN), Autistic Women & Nonbinary Network, or similar
- Content:
- Endorses patient-centered outcomes (early diagnosis reduces family stress)
- Confirms advisory role (autistic adults on study design team)
- Raises ethical considerations (risk of labeling, need for genetic counseling)
- Impact: Shows community buy-in, addresses ethical concerns proactively
Deliverables for Fix 1.1:
- Named PI + Co-Is (6 investigators total)
- CVs/Biosketches (5 pages each, NIH format)
- Preliminary data report (5-10 pages: pilot study, FL simulation, OR LoRA results)
- Letters of support (6 minimum: INCITE, 5 sites, 2 advisory, 1 regulatory)
Estimated Effort: 4-6 weeks (if investigators already identified and willing)
Estimated Cost: $0 (if investigators volunteer) to $50K (if preliminary data collection needed)
Impact on Score:
- Investigators dimension: 7.2 → 8.5-9.0 (+1.3 to +1.8 points)
- Overall composite: 7.8 → 8.2-8.5 (+0.4 to +0.7 points)
Current State: Model described in detail but not cited, no preliminary results, unclear if exists
Required Fix:
Decision Point: Does NeuroX-Fusion 130B Exist?
Option A: Model Exists (Lower Risk, Preferred)
Action Items:
-
Cite the model:
- Add citation: Paper, technical report, ArXiv preprint, OR DOE INCITE program website
- Example: "NeuroX-Fusion 130B (Smith et al., 2025, ArXiv:2501.XXXXX)"
- If no public paper yet: "NeuroX-Fusion 130B (INCITE 2025, https://www.alcf.anl.gov/incite/...)"
-
Provide preliminary results:
- Show performance on any task (even general neuroscience, non-DD)
- Examples:
- "NeuroX-Fusion 130B achieves 0.92 AUC on BrainLM benchmark (predicting age, sex, cognitive scores from fMRI)"
- "Zero-shot transfer to ABIDE: 78% accuracy (vs. BrainLM 75%, random 50%)"
- Include 1-2 figures: Performance vs. model size, performance vs. training data size
-
Confirm access:
- Attach INCITE allocation letter (see Fix 1.1, Letter #1)
- State license: "Open-source under MIT license" OR "Proprietary, but we have license to fine-tune and commercialize"
-
Add architecture details:
- Currently vague: "SwiFT 4D + Channel-equivariant + BrainOmni" - how are these integrated?
- Add architecture diagram (1 page): Show how 3 sub-models are combined (ensemble? modular? hybrid?)
- Clarify parameter breakdown: SwiFT (15B) + Channel-eq (30B) + BrainOmni (85B) = 130B (additive?) OR 130B total (shared parameters?)
Option B: Model Doesn't Exist (Higher Risk, Must Build)
Action Items:
-
Add Specific Aim 1: "Pre-train NeuroX-Fusion 130B Foundation Model"
- Move this from "Background" to explicit research aim
- Timeline: 12-18 months (Year 1-2)
- Milestones:
- Month 1-6: Data curation (ABIDE, ADHD-200, NDAR, HCP - total 50,000+ scans)
- Month 7-12: Model training on Aurora (100 epochs, 10-15 days compute)
- Month 13-18: Validation on held-out datasets, zero-shot transfer tests
-
Add budget for pre-training:
- Compute: $5-10M (Aurora allocation OR cloud TPUs if INCITE unavailable)
- Data licensing: $1-2M (NDAR, HCP data use agreements)
- Personnel: $1M (2 ML engineers × 2 years × $250K each)
- Total: $7-13M (add to $50M total → $57-63M)
-
Add fallback plan:
- If Aurora unavailable or delayed: Use Google Cloud TPU v5 (estimate $3-5M for 130B model training)
- If 130B infeasible: Fall back to BrainLM (3,662 subjects, 3B parameters, existing, open-source)
- If compute budget insufficient: Reduce model size to 13B (10× smaller, 10× faster, $500K-1M compute)
-
Add risk mitigation:
- Risk: 130B model training fails (technical issues, compute unavailable, insufficient data)
- Mitigation: Start with smaller model (13B) in Year 1, scale to 130B in Year 2 if successful
- Contingency: BrainLM (existing) ensures project can proceed even without custom foundation model
Recommendation: If NeuroX-Fusion 130B doesn't exist, seriously consider using BrainLM (existing) + LoRA fine-tuning instead of building 130B from scratch. This reduces risk, timeline, and cost significantly.
Deliverables for Fix 1.2:
- If model exists: Citation, preliminary results (1-2 pages), architecture diagram (1 page), INCITE allocation letter
- If model doesn't exist: New Specific Aim 1 (pre-training, 3-5 pages), revised budget (+$7-13M), fallback plan (1-2 pages), risk mitigation (1 page)
Estimated Effort: 1-2 weeks (if model exists) OR 3-4 weeks (if must add pre-training aim)
Impact on Score:
- Innovation dimension: 8.5 → 9.0 (+0.5 points) if model validated
- Approach dimension: 7.5 → 8.5 (+1.0 points) if feasibility de-risked
- Overall composite: 7.8 → 8.3-8.5 (+0.5 to +0.7 points)
Current State: "50 sites" mentioned but no recruitment plan, no retention strategy, no governance structure
Required Fix:
Step 1: Site Recruitment Plan (3-5 pages)
Site Eligibility Criteria:
- Academic or clinical site with autism diagnostic capabilities
- MRI scanner (minimum 1.5T, pediatric imaging capabilities)
- ADOS-2 certified evaluator on staff (or willing to get certified)
- IRB capacity to approve multi-site study within 6-12 months
- Minimum patient volume: 60 DD diagnoses/year (to recruit 60 over 5 years)
- Genomics access: On-site lab OR partnership with external genomics core
- IT infrastructure: Secure data transfer (SFTP, federated learning server connection)
Recruitment Strategy:
-
Phase 1 (Months 1-6): Recruit 10 "anchor sites" (well-established partners, high capacity)
- Leverage PI's existing networks (ENIGMA, ABIDE contacts)
- Target tier-1 academic medical centers (e.g., UCLA, Stanford, Yale, MGH in US; Oxford, KCL in UK; Seoul National University in Korea)
- Deliverable: 10 site commitment letters
-
Phase 2 (Months 7-12): Recruit 20 "core sites" (mix of academic + large community clinics)
- Advertise at conferences (INSAR, IMFAR, OHBM, ACNP)
- Direct outreach to autism research centers (SFARI grantees, Autism Centers of Excellence)
- Deliverable: 20 site commitment letters
-
Phase 3 (Months 13-18): Recruit 20 "diversity sites" (rural, low-resource, international)
- Partner with global health organizations (WHO, UNICEF regional offices)
- Offer resource-sharing (central genomics core, traveling EEG units)
- Deliverable: 20 site commitment letters
Recruitment Incentives:
- Scientific: Co-authorship on main papers (ICMJE criteria), site-specific analyses for local publications
- Financial: $100K/site/5 years ($20K/year for staff time, patient recruitment)
- Training: Central IRB, ADOS-2 training, federated learning technical support
- Data: Sites retain access to their own data + aggregate de-identified global dataset (for secondary analyses)
Recruitment Timeline:
- Target: 50 sites recruited within 18 months
- Assumption: 20-30% attrition over 5 years → recruit 65 sites initially to ensure 50 complete
- Replacement strategy: Waitlist of 10-15 backup sites (if primary site drops, activate backup)
Step 2: Site Retention Plan (2-3 pages)
Retention Strategies:
- Regular communication: Monthly site PI calls, quarterly steering committee meetings
- Progress transparency: Real-time dashboards showing recruitment, data quality, federated model performance (by site)
- Recognition: Acknowledge top-performing sites (fastest recruitment, highest data quality) in newsletters
- Flexibility: Allow sites to pause recruitment if capacity issues (e.g., COVID-19-like disruptions)
Site Support:
- Central IRB: Single IRB protocol for all US sites (reduces local IRB burden from 12 months to 2-3 months)
- Data management training: 2-day on-site training (or virtual) for research coordinators
- Technical support: 24/7 federated learning server support, MRI quality control feedback
- Troubleshooting: Dedicated project manager for each region (US, Europe, Asia, Latin America, Africa)
Attrition Assumptions:
- Expected attrition: 20-30% over 5 years (typical for long studies)
- Reasons: PI leaves institution, loss of funding, IRB issues, loss of ADOS-2 staff
- Mitigation: Over-recruit to 65 sites, maintain waitlist of 10-15 backups
Step 3: Governance Structure (2-3 pages)
Steering Committee:
- Composition: PI (chair) + 5 site leads (1 per continent) + 2 advisory board members + NIH program officer (ex officio)
- Responsibilities:
- Approve major protocol changes (e.g., add new modality, change inclusion criteria)
- Review interim analyses (Bayesian adaptive design stopping rules)
- Resolve site conflicts (e.g., data quality issues, authorship disputes)
- Meetings: Quarterly (in-person at conferences OR virtual)
Data Coordinating Center (DCC):
- Location: Host institution (PI's university)
- Staffing: 5 FTE (project manager, data manager, biostatistician, federated learning engineer, QC analyst)
- Responsibilities:
- Data quality monitoring (MRI QC, genomics QC, outlier detection)
- Federated learning server management (model aggregation, site-specific fine-tuning)
- Statistical analyses (primary/secondary outcomes, interim analyses)
- Regulatory compliance (IRB renewals, FDA reporting)
Publication Policy:
- Main Papers: ICMJE authorship (substantial contribution, draft/critical revision, final approval)
- Anticipated: 10-15 main papers (all site PIs co-authors if recruited ≥10 participants)
- Site-Specific Papers: Sites can publish their own data (single-site analyses) with DCC co-authorship
- Authorship Order: Alphabetical by site OR contribution-based (decided by steering committee)
Step 4: Budget Breakdown (1-2 pages)
Current Budget: "5000백만원" ($50M) total - but no breakdown
Detailed Budget:
| Category | Amount ($M) | Justification |
|---|---|---|
| Site Payments | $25M | 50 sites × $100K/site × 5 years (patient recruitment, staff time, local IRB) |
| Data Coordinating Center | $5M | 5 FTE × 5 years × $200K/FTE (salaries, benefits, overhead) |
| Compute (INCITE/Cloud) | $10M | Aurora pre-training ($5M) + DGX fine-tuning ($2M) + cloud backup ($3M) |
| Clinical Trial (pRCT) | $5M | 10 sites × $500K/site (pRCT-specific costs, ADOS-2 assessments, regulatory) |
| Genomics | $2M | WES for 2,000 participants ($1,000/sample) |
| Regulatory/FDA | $2M | FDA submission ($500K), ISO 13485 QMS ($500K), regulatory consultant ($1M) |
| Contingency (20%) | $10M | Unexpected costs (site dropout, compute overruns, regulatory delays) |
| TOTAL | $59M | (Revised from $50M) |
Justification for Budget Increase:
- Original $50M underestimated genomics ($2M) and contingency ($10M)
- Recommend: Request $60M OR de-scope to $50M (reduce sites to 30, genomics to n=1,000)
Deliverables for Fix 1.3:
- Site recruitment plan (3-5 pages): Eligibility, strategy, timeline, incentives
- Site retention plan (2-3 pages): Retention strategies, support, attrition assumptions
- Governance structure (2-3 pages): Steering committee, DCC, publication policy
- Budget breakdown (1-2 pages): Detailed line items totaling $50M (or revised to $60M)
Estimated Effort: 2-3 weeks
Impact on Score:
- Approach dimension: 7.5 → 8.2-8.5 (+0.7 to +1.0 points)
- Environment dimension: 7.8 → 8.5 (+0.7 points)
- Overall composite: 7.8 → 8.2-8.5 (+0.4 to +0.7 points)
TOTAL IMPACT OF PRIORITY 1 FIXES:
Before:
- Composite Score: 7.8/9.0 (Top 10-15%)
- Success Probability: 65-75%
After Priority 1:
- Composite Score: 8.3-8.7/9.0 (Top 5-8%)
- Success Probability: 80-90%
Estimated Total Effort: 6-10 weeks (parallel work on all 3 fixes)
Estimated Total Cost: $50K-100K (preliminary data collection, letter solicitation)
Definition: These improvements aren't mandatory, but significantly strengthen competitiveness and polish.
Estimated Impact: +0.3 to +0.5 points overall (8.3-8.7 → 8.6-9.0)
Timeline: 2-4 weeks additional effort
Issue: Power calculations assume independent observations, but 50 sites = 50 clusters (non-independent)
Fix:
-
Estimate site-level ICC from ABIDE/ADHD-200:
- Download ABIDE data (public, n=1,112, 17 sites)
- Fit mixed-effects model: Diagnosis ~ Brain_Features + (1|Site)
- Extract ICC = Var(Site) / [Var(Site) + Var(Residual)]
- Expected ICC: 0.05-0.15 (typical for multi-site neuroimaging)
-
Calculate design effect:
- Design effect = 1 + (m-1) × ICC
- Where m = average cluster size = 60 participants/site
- Example: ICC=0.10 → DE = 1 + 59×0.10 = 6.9
-
Recalculate effective sample size:
- n_eff = n / DE = 3,000 / 6.9 = 435
-
Recalculate power:
- Original: n=3,000, power >99% for d=0.5
- Cluster-adjusted: n_eff=435, power = ? (use G*Power or simulation)
- If power drops below 80%: Increase sample size OR reduce ICC (add covariates)
-
Add to proposal:
- Section: "Cluster-Adjusted Power Analysis" (1-2 pages)
- Table: Power for different ICC values (0.05, 0.10, 0.15)
- Sensitivity: If power inadequate, mitigation strategies (increase n, add fixed effects for scanner type)
Deliverable: 1-2 pages added to "Statistical Methods" section
Effort: 1 week (download ABIDE, run analysis, add to proposal)
Impact: Approach score: 7.5 → 8.5 (+1.0 points)
Issue: FDA requires performance stratified by demographic subgroups (21st Century Cures Act), but proposal lacks fairness plan
Fix:
Step 1: Define Demographic Subgroups
- Race/Ethnicity: White, Black/African American, Hispanic/Latino, Asian, Native American, Other/Mixed
- Sex: Male, Female
- Age: 0-2 years, 2-5 years, 5-10 years, 10-18 years
- Socioeconomic Status: Low (<$50K), Middle ($50K-$150K), High (>$150K)
- Geographic: Urban, Suburban, Rural
Step 2: Define Fairness Metrics
- Demographic Parity: P(Predicted ASD | Subgroup A) ≈ P(Predicted ASD | Subgroup B)
- Equal Opportunity: Sensitivity should be equal across subgroups (no group has lower true positive rate)
- Equalized Odds: Sensitivity AND specificity equal across subgroups
- Calibration: Predicted probabilities match observed outcomes across subgroups
Step 3: Fairness Analysis Plan
- Primary Fairness Metric: Equalized odds (FDA preference)
- Acceptable Disparity: ≤5 percentage points difference in sensitivity/specificity across subgroups
- Example: Sensitivity in White = 95%, Black = 92% → 3-point gap (acceptable)
- If Disparity >5 Points: Apply bias mitigation
- Re-weighting: Oversample underrepresented groups in training
- Adversarial debiasing: Add fairness constraint to loss function
- Group-specific thresholds: Optimize decision threshold per subgroup
Step 4: Add to Proposal
- Section: "Algorithmic Fairness and Health Equity" (2-3 pages)
- Table: Expected sample size per subgroup (to ensure adequate power for fairness analysis)
- Figure: Schematic of fairness analysis pipeline
Deliverable: 2-3 pages added to "Approach" section
Effort: 1 week
Impact: Approach score: +0.3, addresses FDA requirement
Issue: Real-world deployments will have missing modalities (e.g., no genomics due to cost). Performance degradation unclear.
Fix:
Step 1: Simulate Missing Modality Scenarios
- Scenario 1: All 5 modalities (sMRI, fMRI, EEG, genomics, digital) → Baseline AUC 0.92-0.95
- Scenario 2: 4 modalities (drop genomics, most expensive) → AUC = ?
- Scenario 3: 3 modalities (drop genomics + EEG) → AUC = ?
- Scenario 4: 2 modalities (imaging only: sMRI + fMRI) → AUC = ?
- Scenario 5: 1 modality (digital only, cheapest/most scalable) → AUC = ?
Step 2: Estimate Performance from Literature
- 5 modalities: 0.92-0.95 (proposed, multimodal synergy)
- 4 modalities: 0.90-0.92 (slight drop)
- 3 modalities: 0.88-0.90 (moderate drop)
- 2 modalities (imaging): 0.85-0.87 (CCTF benchmark: 0.82-0.87)
- 1 modality (digital): 0.88-0.90 (ADHD wearables: 0.89-0.95)
Step 3: Clinical Decision Thresholds
- Tier 1 screening (digital only): AUC 0.88-0.90 acceptable (high sensitivity, moderate specificity)
- Tier 2 confirmation (imaging + genomics): AUC 0.92-0.95 required (high sensitivity AND specificity)
- Minimum acceptable AUC: 0.85 (FDA Canvas Dx has 81.6% specificity, ~0.90 AUC estimated)
Step 4: Add to Proposal
- Section: "Missing Modality Robustness Analysis" (1-2 pages)
- Table: AUC across modality combinations (with 95% CI)
- Figure: Performance degradation curve (AUC vs. number of modalities)
Deliverable: 1-2 pages added to "Approach" section
Effort: 1 week
Impact: Approach score: +0.2, demonstrates real-world robustness
Issue: Some impact metrics are inflated and hurt credibility
Fix:
Inflated Claim #1: "40-60 Nature/Science papers"
- Reality Check: Large consortia publish 5-10 Nature/Science papers over 10 years (ENIGMA: ~10, HCP: ~8)
- Revised Estimate: 10-15 high-impact papers (Nature, Science, Nature Medicine, JAMA, Lancet)
- 5 main outcomes papers (diagnosis, subtyping, genomics, causal inference, pRCT)
- 5 methods papers (foundation model, federated learning, multimodal fusion)
- 5 secondary analyses (sex differences, developmental trajectories, treatment response)
- Plus: 30-40 total papers (including mid-tier journals: NeuroImage, Biological Psychiatry, Autism Research)
Inflated Claim #2: "10-20% global market share within 5 years of FDA approval"
- Reality Check: Canvas Dx (4 years post-FDA): estimated 5-10% US market penetration
- Revised Estimate: 5-10% global market share within 5 years
- Optimistic: 10% ($50-80M annual revenue)
- Realistic: 5% ($25-40M annual revenue)
- Pessimistic: 2-3% ($10-24M annual revenue, niche player)
Inflated Claim #3: "조기중재로 발달 전환 2-3배 개선"
- Basis: Early intervention literature shows 20-50% symptom reduction (not 2-3× improvement)
- Revised Estimate: "30-50% improvement in developmental outcomes (IQ, adaptive functioning, symptom severity) compared to standard care"
Deliverable: Revise "Expected Outcomes" section (1-2 pages) with conservative estimates
Effort: 1-2 days
Impact: Impact score: 8.2 → 8.5 (+0.3 points, improved credibility)
Issue: Ethical issues (early diagnosis, genetic counseling) and FDA requirements (risk management) are under-addressed
Fix:
Step 1: Ethical Framework (2-3 pages)
Issue #1: Early Diagnosis at 6-12 Months - Ethical Considerations
- Pro: Enables early intervention during peak neuroplasticity
- Con: Risk of labeling, family anxiety, false positives
- Mitigation:
- Genetic counseling for all families (pre-test and post-test)
- Clear communication: "High risk" ≠ "definite diagnosis" (only ADOS-2 at 24 months is diagnostic)
- Psychosocial support: Parent support groups, mental health resources
Issue #2: Incidental Findings (Genomics + Imaging)
- ACMG SF v3.0: Must report actionable secondary findings (cancer genes, cardiac genes)
- Plan:
- Pre-test counseling: Families opt-in to receive incidental findings
- Reporting protocol: Clinical geneticist reviews all WES, reports ACMG SF variants
- Follow-up: Refer to appropriate specialist (oncology, cardiology)
Issue #3: Algorithmic Bias and Health Equity
- Risk: AI may underperform in minority populations (as addressed in Fix 2.2)
- Mitigation: Fairness analysis, bias mitigation, diverse recruitment
- Equity Plan: Low-resource sites participate via resource-sharing (central genomics core)
Step 2: FDA Risk Management (ISO 14971) (2-3 pages)
Hazard Analysis:
| Hazard | Severity | Likelihood | Risk Level | Mitigation |
|---|---|---|---|---|
| False Positive | Moderate (family anxiety, unnecessary intervention) | Medium (10-15% expected) | Medium | Pre-test counseling, confirmatory ADOS-2 at 24 months |
| False Negative | High (missed diagnosis, delayed treatment) | Low (5-10% expected) | Medium-High | Longitudinal monitoring (catch late-onset ASD), secondary screening at 18-24 months |
| Model Bias | High (underperforms in minorities → health inequity) | Medium (if not mitigated) | High | Fairness analysis (Fix 2.2), diverse recruitment, bias mitigation algorithms |
| Data Breach | High (genomic data leak → re-identification) | Low (strong encryption, federated learning) | Medium | Differential privacy, homomorphic encryption, HIPAA compliance, cybersecurity audits |
| Algorithm Drift | Moderate (performance degrades over time as population shifts) | Medium (without monitoring) | Medium | Post-market surveillance (continuous performance monitoring), model updates |
Risk Mitigation Strategies:
- Design Controls: Fairness constraints in model training, differential privacy (ε=1.0)
- Verification/Validation: pRCT (n=500, 10 sites), external validation (50 sites)
- Post-Market Surveillance: Real-world performance monitoring (quarterly reports to FDA)
Step 3: Post-Market Surveillance Plan (1-2 pages)
FDA Requirement: Continuous monitoring of AI/ML device performance post-deployment
Plan:
- Data Collection: All clinical deployments report outcomes (predicted diagnosis vs. ADOS-2 gold standard)
- Frequency: Quarterly performance reports (sensitivity, specificity, AUC) to FDA
- Thresholds: If sensitivity <90% or specificity <85% for 2 consecutive quarters → trigger investigation
- Model Updates: Algorithm Change Protocol (pre-specified by FDA) allows updates without new clearance (if performance stays within bounds)
Deliverables:
- Ethical framework (2-3 pages)
- ISO 14971 risk management file (2-3 pages)
- Post-market surveillance plan (1-2 pages)
Effort: 2 weeks
Impact: Approach score: +0.3, addresses FDA requirements proactively
TOTAL IMPACT OF PRIORITY 2 FIXES:
Before Priority 2:
- Composite Score: 8.3-8.7/9.0 (after Priority 1)
After Priority 2:
- Composite Score: 8.6-9.0/9.0 (Top 3-5%, excellent positioning)
- Success Probability: 85-95%+
Estimated Total Effort (Priority 2): 4-6 weeks
Definition: These are refinements that elevate a strong proposal to exceptional. Not required for funding, but increase probability of "Outstanding" rating.
Estimated Impact: +0.1 to +0.3 points overall (8.6-9.0 → 8.7-9.0+, capped)
Timeline: 2-3 weeks additional effort
Proposal: Conduct clinician validation of AI explanations (n=10-15 child psychiatrists)
Method:
- Present 20 sample cases (10 ASD, 10 TD) with AI predictions + explanations (attention maps, SHAP values)
- Ask clinicians: "Do you understand the AI's reasoning?" (5-point Likert scale)
- Ask: "Do you agree with the AI's prediction?" (Yes/No/Uncertain)
- Target: ≥70% clinicians rate explanations as "understandable" (4-5 on Likert), ≥80% agree with predictions
Deliverable: 1-2 pages added to "Interpretability" section, 1 figure showing clinician comprehension results
Effort: 2-3 weeks (recruit clinicians, prepare materials, run study, analyze)
Impact: Approach score: +0.2, demonstrates clinical usability
Proposal: Show performance vs. privacy trade-off (ε=0.1, 1.0, 10, ∞)
Method:
- Simulate federated learning on ABIDE with different DP noise levels
- Measure: AUC vs. ε (privacy budget)
- Expected: ε=∞ (no DP): AUC 0.90, ε=10: AUC 0.89, ε=1.0: AUC 0.87, ε=0.1: AUC 0.82
- Recommendation: ε=1.0 balances privacy (strict) and performance (acceptable)
Deliverable: 1 page added to "Federated Learning" section, 1 figure showing AUC vs. ε
Effort: 1 week
Impact: Approach score: +0.1, demonstrates rigor
Proposal: Plan for insurance reimbursement (CPT code application)
Method:
- Identify existing CPT codes that could apply (e.g., 96127 brief emotional/behavioral assessment)
- Propose new CPT code: "AI-assisted autism diagnostic evaluation" (estimated reimbursement $400-600)
- Timeline: Apply to AMA CPT Editorial Panel (Year 6-7, after pRCT results)
- Payer engagement: Pilot contracts with 2-3 large insurers (Blue Cross, Aetna, UnitedHealthcare)
Deliverable: 1-2 pages added to "Commercialization" section
Effort: 1 week
Impact: Impact score: +0.1, demonstrates real-world sustainability
Proposal: Detailed MRI/EEG quality control protocol (following ENIGMA best practices)
Method:
- MRI QC:
- Automated: FreeSurfer QC scripts (Euler number, surface holes, contrast-to-noise ratio)
- Manual: Visual inspection of 10% random sample (trained raters, inter-rater reliability κ≥0.80)
- Exclusion criteria: Motion >3mm translation, severe artifacts, failed segmentation
- EEG QC:
- Automated: Artifact detection (eye blinks, muscle artifacts), signal-to-noise ratio
- Manual: Visual inspection of event-related potentials (N170, ERN)
- Exclusion criteria: >30% trials rejected, low SNR (<3 dB)
Deliverable: 2-3 pages added to "Data Quality Assurance" section
Effort: 1 week
Impact: Approach score: +0.1, demonstrates rigor
TOTAL IMPACT OF PRIORITY 3 FIXES:
Before Priority 3:
- Composite Score: 8.6-9.0/9.0 (after Priority 1 + 2)
After Priority 3:
- Composite Score: 8.7-9.0/9.0 (Top 1-3%, exceptional)
- Success Probability: 90-95%+
Estimated Total Effort (Priority 3): 3-4 weeks
Week 1-2: Priority 1A (Investigators)
- Identify and recruit PI + Co-Is (6 investigators)
- Draft biosketches (5 pages each)
- Output: Named investigators + CVs
Week 3-4: Priority 1B (Preliminary Data)
- Collect preliminary data (pilot multimodal fusion OR FL simulation OR LoRA fine-tuning)
- Output: 5-10 page preliminary data report
Week 5-6: Priority 1C (INCITE Model)
- Clarify NeuroX-Fusion 130B status (cite + validate OR add pre-training aim)
- Secure INCITE allocation letter (or confirm model access)
- Output: Model validation OR new Specific Aim 1
Week 7: Priority 1D (Site Coordination)
- Write site recruitment/retention/governance plans
- Develop detailed budget breakdown
- Output: 8-10 pages of operational details
Week 8: Priority 2 (High-Impact Fixes)
- Add cluster power analysis
- Add fairness analysis plan
- Add missing modality analysis
- Revise impact projections
- Add ethics + risk management
- Output: 10-15 pages of additional content
Week 9-10 (Optional): Priority 3 (Polish)
- Interpretability validation
- DP sensitivity
- CPT code plan
- QC plan
- Output: 5-8 pages of refinements
Week 11-12 (Optional): Final Assembly
- Integrate all fixes into main proposal
- Proofread, format, check page limits
- Solicit external feedback (advisory board, mock review)
- Output: Final polished proposal
- Name investigators with track records (PI + 5 Co-Is, CVs, preliminary data, letters of support)
- Clarify INCITE model status (cite + validate OR add pre-training aim + fallback)
- Add 50-site coordination plan (recruitment, retention, governance, budget breakdown)
Timeline: 6-8 weeks Impact: 7.8 → 8.3-8.7 (Top 5-8%)
- Cluster-adjusted power analysis
- Algorithmic fairness plan
- Missing modality analysis
- Revise impact projections (conservative)
- Ethics + risk management + post-market surveillance
Timeline: +4 weeks (total 10-12 weeks) Impact: 8.3-8.7 → 8.6-9.0 (Top 3-5%)
- Interpretability validation
- DP sensitivity analysis
- CPT code reimbursement plan
- MRI/EEG QC plan
Timeline: +3 weeks (total 13-15 weeks) Impact: 8.6-9.0 → 8.7-9.0+ (Top 1-3%, exceptional)
Current State:
- This proposal has outstanding scientific foundations (innovation, rigor, impact)
- It is currently in top 10-15% tier (fundable, but not top 5%)
- 3 critical barriers prevent top 5% success: Investigators, INCITE model, site coordination
With Focused Effort:
- 6-8 weeks of dedicated work on Priority 1 → Top 5-8% tier
- 10-12 weeks total with Priority 1+2 → Top 3-5% tier
- 13-15 weeks total with all priorities → Top 1-3% tier (exceptional)
This proposal can become FIELD-DEFINING with the right team, preliminary data, and operational details.
The science is revolutionary. The execution plan needs strengthening. You have a clear roadmap to top 5% success.
Recommended Next Steps:
- Week 1: Identify PI candidate (reach out to ENIGMA/ABIDE leaders)
- Week 2: Recruit Co-Investigators (send invitations with 1-page project summary)
- Week 3-4: Collect preliminary data (even n=50 pilot shows commitment)
- Week 5-8: Implement all Priority 1 fixes
- Week 9-12: Implement Priority 2 fixes
- Week 13: Submit exceptional proposal
You have the potential for a revolutionary, field-defining grant. Execute this roadmap, and success is highly likely.