Libra Model Card

Paper and Resources for More Information

For further details about Libra, including its architecture, training process, and use cases, please refer to the following resources:

  • Project Website: Libra v1.0
  • Paper: Comprehensive description of Libra’s design and experiments (arXiv:2411.19378)
  • Code Repository: Open-source implementation and pre-trained models (GitHub: X-iZhang/Libra)
  • Model Weights: X-iZhang/libra-v1.0-7b (Hugging Face)

Core Components:

  • RAD-DINO: Vision encoder pre-trained on medical imaging datasets for robust image feature extraction.
  • Meditron-7B: A large language model specialised in medical text generation, based on Llama-2.
  • Temporal Alignment Connector (TAC): Custom-designed adapter for integrating temporal information between current and prior chest X-rays.
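
The temporal integration idea can be illustrated with a minimal sketch. This is plain Python and is not the actual TAC, which is a learned adapter over the vision encoder's features; the dummy-prior fallback for single-study inputs is an assumption here, not a detail stated on this card:

```python
# Conceptual sketch only: Libra's real Temporal Alignment Connector is a
# learned adapter over RAD-DINO features. Here, "fusion" is simplified to
# concatenating current features with a current-minus-prior difference.
from typing import List, Optional

Vector = List[float]

def fuse_temporal(current: Vector, prior: Optional[Vector]) -> Vector:
    """Combine current-study features with a prior study's features.

    When no prior exists, the current features stand in as a dummy prior
    (assumed fallback), so the difference term is all zeros.
    """
    reference = prior if prior is not None else current
    return current + [c - r for c, r in zip(current, reference)]

feats_now = [1.0, 2.0, 3.0]
feats_prior = [0.5, 2.0, 1.0]
print(fuse_temporal(feats_now, feats_prior))  # [1.0, 2.0, 3.0, 0.5, 0.0, 2.0]
print(fuse_temporal(feats_now, None))         # difference half is all zeros
```

The point of the sketch: the connector always receives a prior reference, so the model can express "compared with the prior study" reasoning even when only one study is available.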

Training Strategy:

Two-stage training process:

  1. Temporal feature alignment.
  2. Fine-tuning on the radiology report generation task.
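
A minimal sketch of what each stage might update. The exact freezing schedule is an assumption following common vision-language training practice, not something this card specifies:

```python
# Hedged sketch of the two-stage schedule; which modules are frozen in each
# stage is assumed (a common alignment-then-finetune convention), not taken
# from the Libra card itself.
def trainable_modules(stage: int) -> dict:
    """Return which components would be updated in each training stage."""
    if stage == 1:
        # Stage 1: temporal feature alignment -- train only the connector.
        return {"rad_dino": False, "tac": True, "meditron": False}
    if stage == 2:
        # Stage 2: fine-tune for report generation -- the language model
        # is updated as well.
        return {"rad_dino": False, "tac": True, "meditron": True}
    raise ValueError(f"unknown training stage: {stage}")

print(trainable_modules(1))
print(trainable_modules(2))
```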

Primary Use Case:

Generates the findings and impression sections of chest X-ray reports, incorporating temporal comparisons with prior studies.
