Grounded-Segment-Anything AI technology page: Top Builders
Explore the top contributors with the most Grounded-Segment-Anything app submissions in our community.
Grounded-Segment-Anything
Grounded-Segment-Anything is a framework that combines Grounding DINO and Segment Anything to detect and segment objects in images using text prompts. The project also incorporates other models, such as Stable-Diffusion, Tag2Text, and BLIP, for tasks such as image generation and automatic labeling.
| General | |
|---|---|
| Release date | March 31, 2023 |
| Repository | https://github.com/IDEA-Research/Grounded-Segment-Anything |
| Type | Image Segmentation and Detection |
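To make the two-stage pipeline concrete, here is a minimal sketch (not the repo's official demo script): Grounding DINO turns a free-form text prompt into bounding boxes, and SAM turns each box into a segmentation mask. It assumes the `groundingdino` and `segment_anything` packages from the official repositories are installed; the config and checkpoint paths, the input image, and the prompt text are placeholders.

```python
# Minimal sketch: text-prompted detection with Grounding DINO,
# then box-prompted mask generation with SAM.
import torch
from torchvision.ops import box_convert
from groundingdino.util.inference import load_model, load_image, predict
from segment_anything import sam_model_registry, SamPredictor

# Placeholder config/checkpoint paths -- downloaded separately from the
# GroundingDINO and Segment Anything releases.
dino = load_model("GroundingDINO_SwinT_OGC.py", "groundingdino_swint_ogc.pth")
sam = sam_model_registry["vit_h"](checkpoint="sam_vit_h_4b8939.pth")
predictor = SamPredictor(sam)

# 1) Grounding DINO: free-form text prompt -> bounding boxes + phrases.
image_source, image = load_image("demo.jpg")  # RGB numpy array + model tensor
boxes, logits, phrases = predict(
    model=dino,
    image=image,
    caption="dog . chair .",
    box_threshold=0.35,
    text_threshold=0.25,
)

# 2) Rescale normalized cxcywh boxes to absolute xyxy pixel coordinates.
h, w, _ = image_source.shape
boxes_xyxy = box_convert(
    boxes * torch.tensor([w, h, w, h]), in_fmt="cxcywh", out_fmt="xyxy"
).numpy()

# 3) SAM: use each detected box as a prompt to produce a segmentation mask.
predictor.set_image(image_source)
for box, phrase in zip(boxes_xyxy, phrases):
    masks, scores, _ = predictor.predict(box=box, multimask_output=False)
    print(phrase, masks[0].shape, float(scores[0]))
```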
🔥 Highlighted Projects
- Check out Automated Dataset Annotation and Evaluation with GroundingDINO and SAM, an excellent tutorial on automatic labeling! Thanks a lot to Piotr Skalski and Roboflow!
- Check out the Segment Everything Everywhere All at Once demo! It supports segmenting with various types of prompts (text, point, scribble, referring image, etc.) and any combination of prompts.
- Check out OpenSeeD for interactive segmentation with box input to generate masks.
- Visual instruction tuning with GPT-4! Please check out the multimodal model LLaVA: [Project Page] [Paper] [Demo] [Data] [Model]
Grounded-Segment-Anything AI technology page: Hackathon projects
Discover innovative solutions crafted with Grounded-Segment-Anything, developed by our community members during our hackathons.