This is the PyTorch code of the BLIP paper [blog]. The code has been tested on PyTorch 1.10. To install the dependencies, run:
```
pip install -r requirements.txt
```
Run our interactive demo using the Colab notebook (no GPU needed); demo.ipynb contains the example inference code.
Try out the Web demo, integrated into Huggingface Spaces 🤗 using Gradio. A Replicate web demo and Docker image are also available.
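For reference, the captioning part of the demo comes down to a few lines. The following is a minimal sketch, assuming the `blip_decoder` factory from `models/blip.py`, a checkpoint downloaded from the tables below, and a placeholder image file; see demo.ipynb for the exact code.
```python
import torch
from PIL import Image
from torchvision import transforms
from models.blip import blip_decoder  # model factory provided by this repo; run from the repo root

device = 'cuda' if torch.cuda.is_available() else 'cpu'
image_size = 384

# Demo-style preprocessing: resize, convert to tensor, normalize.
preprocess = transforms.Compose([
    transforms.Resize((image_size, image_size),
                      interpolation=transforms.InterpolationMode.BICUBIC),
    transforms.ToTensor(),
    transforms.Normalize((0.48145466, 0.4578275, 0.40821073),
                         (0.26862954, 0.26130258, 0.27577711)),
])
image = preprocess(Image.open('example.jpg').convert('RGB')).unsqueeze(0).to(device)  # placeholder image

# 'model_base_capfilt_large.pth' is a placeholder for a locally downloaded checkpoint.
model = blip_decoder(pretrained='model_base_capfilt_large.pth', image_size=image_size, vit='base')
model.eval()
model = model.to(device)

with torch.no_grad():
    # Beam-search decoding, as in the demo notebook.
    caption = model.generate(image, sample=False, num_beams=3, max_length=20, min_length=5)
print('caption:', caption[0])
```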
Pre-trained checkpoints:
| Num. pre-train images | BLIP w/ ViT-B | BLIP w/ ViT-B and CapFilt-L | BLIP w/ ViT-L |
|---|---|---|---|
| 14M | Download | - | - |
| 129M | Download | Download | Download |
Finetuned checkpoints:
| Task | BLIP w/ ViT-B | BLIP w/ ViT-B and CapFilt-L | BLIP w/ ViT-L |
|---|---|---|---|
| Image-Text Retrieval (COCO) | Download | - | Download |
| Image-Text Retrieval (Flickr30k) | Download | - | Download |
| Image Captioning (COCO) | - | Download | Download |
| VQA | Download | Download | - |
| NLVR2 | Download | - | - |
Image-Text Retrieval: to evaluate the finetuned BLIP model on COCO, run:
```
python -m torch.distributed.run --nproc_per_node=8 train_retrieval.py \
--config ./configs/retrieval_coco.yaml \
--output_dir output/retrieval_coco \
--evaluate
```
To finetune on COCO retrieval, run:
```
python -m torch.distributed.run --nproc_per_node=8 train_retrieval.py \
--config ./configs/retrieval_coco.yaml \
--output_dir output/retrieval_coco
```
Image Captioning: to evaluate the finetuned BLIP model on COCO, run:
```
python -m torch.distributed.run --nproc_per_node=8 train_caption.py --evaluate
```
To evaluate on NoCaps, run:
```
python -m torch.distributed.run --nproc_per_node=8 eval_nocaps.py
```
To finetune the captioning model, run:
```
python -m torch.distributed.run --nproc_per_node=8 train_caption.py
```
VQA: to evaluate the finetuned BLIP model, run:
```
python -m torch.distributed.run --nproc_per_node=8 train_vqa.py --evaluate
```
To finetune for VQA (this command uses 16 GPUs), run:
```
python -m torch.distributed.run --nproc_per_node=16 train_vqa.py
```
NLVR2: to evaluate the finetuned BLIP model, run:
```
python -m torch.distributed.run --nproc_per_node=8 train_nlvr.py --evaluate
```
To finetune for NLVR2 (this command uses 16 GPUs), run:
```
python -m torch.distributed.run --nproc_per_node=16 train_nlvr.py
```
To finetune a model with ViT-L, simply set 'vit' to 'large' in the config file. The batch size and learning rate may also need to be adjusted accordingly (see the paper's appendix for hyper-parameter details). Gradient checkpointing can also be enabled in the config file to reduce GPU memory usage.
Pre-train: to pre-train the model, run:
```
python -m torch.distributed.run --nproc_per_node=8 pretrain.py --config ./configs/Pretrain.yaml --output_dir output/Pretrain
```
Zero-shot video-text retrieval: first install decord:
```
pip install decord
```
Then run the zero-shot evaluation:
```
python -m torch.distributed.run --nproc_per_node=8 eval_retrieval_video.py
```
We provide bootstrapped pre-training datasets as json files. Each json file contains a list; each item in the list is a dictionary with two key-value pairs: {'url': url_of_image, 'caption': text_of_image}. A minimal loading sketch is shown after the table below.
| Image source | Filtered web caption | Filtered synthetic caption by ViT-B | Filtered synthetic caption by ViT-L |
|---|---|---|---|
| CC3M+CC12M+SBU | Download | Download | Download |
| LAION115M | Download | Download | Download |
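As a rough illustration of the format described above, reading one of these json files might look like the sketch below (the file name is a hypothetical placeholder for one of the downloads):
```python
import json

# 'ccs_filtered.json' is a placeholder name for a downloaded bootstrapped-dataset file.
with open('ccs_filtered.json', 'r') as f:
    pairs = json.load(f)  # a list of {'url': ..., 'caption': ...} dictionaries

print(len(pairs), 'image-text pairs')
print(pairs[0]['url'], '->', pairs[0]['caption'])
```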
If you find this code useful for your research, please consider citing:
```
@inproceedings{li2022blip,
      title={BLIP: Bootstrapping Language-Image Pre-training for Unified Vision-Language Understanding and Generation},
      author={Junnan Li and Dongxu Li and Caiming Xiong and Steven Hoi},
      year={2022},
      booktitle={ICML},
}
```
Acknowledgement
The implementation of BLIP relies on resources from ALBEF, Huggingface Transformers, and timm. We thank the original authors for open-sourcing their work.