hanxiao committed · Commit c79aa4b · verified · 1 Parent(s): af99331

Update README.md

Files changed (1)
  1. README.md +10 -0
README.md CHANGED
@@ -53,6 +53,16 @@ library_name: transformers
  Below, you will find instructions and examples for using `ReaderLM-v2` locally using the Hugging Face Transformers library.
  For a more hands-on experience in a hosted environment, see the [Google Colab Notebook](https://colab.research.google.com/drive/1FfPjZwkMSocOLsEYH45B3B4NxDryKLGI?usp=sharing).

+ ## Via Reader API
+
+ `ReaderLM-v2` is now fully integrated with [Reader API](https://jina.ai/reader/). To use it, simply specify `x-engine: readerlm-v2` in your request headers and enable response streaming with `-H 'Accept: text/event-stream'`:
+
+ ```bash
+ curl https://r.jina.ai/https://news.ycombinator.com/ -H 'x-engine: readerlm-v2' -H 'Accept: text/event-stream'
+ ```
+
+ You can try it without an API key at a lower rate limit. For higher rate limits, you can purchase an API key. Please note that ReaderLM-v2 requests consume 3x the normal token count from your API key allocation. This is currently an experimental feature, and we're working with the GCP team to improve GPU efficiency.
+
  ## On Google Colab

  The easiest way to experience `ReaderLM-v2` is through our [Colab notebook](https://colab.research.google.com/drive/1FfPjZwkMSocOLsEYH45B3B4NxDryKLGI?usp=sharing), which demonstrates HTML-to-markdown conversion, JSON extraction, and instruction-following using the HackerNews frontpage as an example. The notebook is optimized for Colab's free T4 GPU tier and requires `vllm` and `triton` for acceleration and running.
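
The curl example added in this commit makes an anonymous request; the commit only mentions that an API key unlocks higher rate limits without showing the authenticated form. The sketch below assumes the Reader API's usual `Authorization: Bearer` header and a key exported as `JINA_API_KEY`; both are assumptions, not part of the commit.

```bash
# Hedged sketch: authenticated ReaderLM-v2 request via Reader API.
# Assumes a Bearer-token Authorization header and a key exported as
# JINA_API_KEY; adjust to your account's actual setup.
curl https://r.jina.ai/https://news.ycombinator.com/ \
  -H 'x-engine: readerlm-v2' \
  -H 'Accept: text/event-stream' \
  -H "Authorization: Bearer $JINA_API_KEY"
```

Keep the `Accept: text/event-stream` header in either case, since the commit asks for response streaming to be enabled whenever the `readerlm-v2` engine is selected.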
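The Colab paragraph names `vllm` and `triton` as the notebook's acceleration stack. For readers who want to mirror that environment on their own GPU machine rather than in Colab, a minimal sketch follows; the package choices, the hub id `jinaai/ReaderLM-v2`, and the use of vLLM's OpenAI-compatible server are assumptions, not something this commit specifies.

```bash
# Hedged sketch: reproduce the notebook's stack locally.
# Assumes a CUDA GPU (e.g. a T4) and that the weights are published
# under the hub id jinaai/ReaderLM-v2.
pip install vllm triton

# Start vLLM's OpenAI-compatible server with the ReaderLM-v2 weights.
python -m vllm.entrypoints.openai.api_server --model jinaai/ReaderLM-v2
```

Once the server is running (by default on port 8000), the same HTML-to-markdown prompts used in the notebook can be sent to its `/v1/chat/completions` endpoint.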