Any plans to train a version of your zero-shot models on ModernBERT? I'm finding that ModernBERT gives a huge boost in speed, with a slight drop in performance vs. DeBERTa when I fine-tune it. I'm not sure whether the performance drop is because your zero-shot models were such a strong foundation for transfer learning, or because of the strength of the DeBERTa architecture on NLI tasks.