OceanCLIP-0.15B: Marine Vision-Language Model

A vision-language model fine-tuned on marine imagery and biological terminology using the OpenCLIP framework. Built upon BioCLIP, it is optimized for marine species identification, zero-shot classification, and cross-validation in underwater/sonar environments.

📂 Repository Contents

| Directory | File | Description |
|---|---|---|
| `oceanclip-bio/` | `epoch_50.pt` | Fine-tuned checkpoint: marine-adapted weights after 50 training epochs, including the updated vision and text encoder projections. |
| `oceanclip-bio/` | `terms.txt` | Marine terminology list: one species name per line (e.g., A abramis), used at zero-shot classification time to dynamically build class-specific text prompts. |
| `bioclip/` | `open_clip_config.json` | Architecture & preprocessing config: defines the ViT-B/16 vision encoder, the Transformer text encoder (77-token context, 512 width), and image normalization (mean/std). |
| `bioclip/` | `open_clip_pytorch_model.bin` | Base BioCLIP weights in the original OpenCLIP format; serves as the initialization backbone before marine-specific fine-tuning. |
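Since `terms.txt` drives zero-shot classification, each species name is typically wrapped in a natural-language prompt template before tokenization rather than tokenized raw. A minimal sketch (the template wording and species names here are illustrative assumptions; the repository does not document the exact prompts used in training):

```python
def build_prompts(names, template="a photo of {}, a marine organism."):
    """Wrap each species name (one per line in terms.txt) in a
    natural-language template for the CLIP text encoder."""
    return [template.format(name) for name in names]

prompts = build_prompts(["Abramis brama", "Octopus vulgaris"])
# Pass `prompts` (instead of the raw names) to the tokenizer.
```

Prompt templates generally improve zero-shot accuracy because they match the caption-style text the model saw during contrastive pre-training.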

🚀 Usage

Requires `open_clip_torch`, `torch`, and `Pillow` (install with `pip install open_clip_torch torch pillow`).

```python
import open_clip
import torch
from PIL import Image

# 1. Load architecture & base BioCLIP weights
model, _, preprocess = open_clip.create_model_and_transforms(
    model_name="ViT-B-16",
    pretrained="bioclip/open_clip_pytorch_model.bin",
)
tokenizer = open_clip.get_tokenizer("ViT-B-16")

# 2. Load the fine-tuned marine weights on top of the base model
state_dict = torch.load("oceanclip-bio/epoch_50.pt", map_location="cpu")
model.load_state_dict(state_dict, strict=False)
model.eval()

# 3. Zero-shot inference with the class list from terms.txt
image = preprocess(Image.open("marine_input.jpg")).unsqueeze(0)
with open("oceanclip-bio/terms.txt") as f:
    terms = [line.strip() for line in f if line.strip()]
text_tokens = tokenizer(terms)

with torch.no_grad():
    image_feat = model.encode_image(image)
    text_feat = model.encode_text(text_tokens)
    # Normalize both embeddings so the dot product is a cosine similarity,
    # then scale by the model's learned temperature before the softmax.
    image_feat = image_feat / image_feat.norm(dim=-1, keepdim=True)
    text_feat = text_feat / text_feat.norm(dim=-1, keepdim=True)
    logits = (model.logit_scale.exp() * image_feat @ text_feat.T).softmax(dim=-1)

    top_idx = logits.argmax().item()
    print(f"Predicted species: {terms[top_idx]} (confidence: {logits[0, top_idx]:.4f})")
```
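Beyond the single top-1 prediction, it is often useful to inspect the most likely candidates, especially for visually similar species. A small hypothetical helper (not part of the repository), assuming `probs` is the row of softmaxed probabilities produced above (e.g. `logits[0].tolist()`):

```python
def top_k_predictions(probs, terms, k=5):
    """Pair each term with its probability and return the k most
    likely (term, probability) pairs, highest probability first."""
    ranked = sorted(zip(terms, probs), key=lambda pair: pair[1], reverse=True)
    return ranked[:k]

# Usage with the snippet above:
# top5 = top_k_predictions(logits[0].tolist(), terms, k=5)
```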