Create llava-plus-multimodal-tool-use.md #30

Open · clumsypanda-web wants to merge 1 commit into main
Conversation

clumsypanda-web

This PR adds LLaVA-Plus, an extension of LLaVA that learns to plan and invoke external tools from visual instructions. The paper introduces:

  • First visual instruction dataset specifically for multimodal tool use
  • Novel approach to dynamic tool/skill integration in multimodal models (see the sketch after this list)
  • State-of-the-art performance across multiple benchmarks
  • Complete reproducibility with public code, data, and checkpoints
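
As a quick orientation for reviewers, here is a minimal sketch of the plan-then-execute loop the paper describes. All names here (`TOOLS`, `model_generate`, the `API_name`/`API_params` fields) are illustrative stand-ins, not the codebase's actual API:

```python
# Minimal, hypothetical sketch of a LLaVA-Plus-style tool-use turn:
# the model emits a structured "thoughts + actions" message, an executor
# runs the requested skills, and the observations are fed back to the
# model to compose the final answer. Not the repository's real API.
import json
from typing import Callable, Dict

# Illustrative skill registry; the paper plugs in tools such as
# detection, segmentation, captioning, and image generation.
TOOLS: Dict[str, Callable[..., str]] = {
    "caption": lambda image: "a dog playing in a park",  # stub skill
}

def model_generate(prompt: str) -> str:
    """Stand-in for the multimodal LM; always returns one tool call."""
    return json.dumps({
        "thoughts": "A caption would help answer this.",
        "actions": [{"API_name": "caption",
                     "API_params": {"image": "input.jpg"}}],
    })

def tool_use_turn(instruction: str) -> str:
    # 1. Plan: the model decides which skill(s) to invoke and with what args.
    call = json.loads(model_generate(instruction))
    # 2. Execute: run each requested tool and collect observations.
    observations = [TOOLS[act["API_name"]](**act["API_params"])
                    for act in call["actions"]]
    # 3. Respond: feed observations back so the model can answer.
    return model_generate(f"{instruction}\nObservations: {observations}")

print(tool_use_turn("What is happening in this image?"))
```

The key idea this sketch illustrates is that tool selection is driven by the model's own structured output rather than a hard-coded pipeline, which is what allows new skills to be plugged in dynamically.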

The resource includes:

  • Paper link and implementation details
  • Original analysis of technical significance
  • Code examples demonstrating core concepts
  • Proper categorization within the multimodal section

Related Links:
- Paper: https://arxiv.org/abs/2311.05437
- Code: https://github.com/LLaVA-VL/LLaVA-Plus-Codebase