I am trying to use a model for text generation with the Inference Endpoint in JavaScript, but currently my responses are being cut off:
```js
const text = ' """<|fim▁begin|> def fibonnaci(): <|fim▁hole|> <|fim▁end|> """'
const { generated_text } = await gpt2.textGeneration({ inputs: text, max_new_tokens: 500 })
console.log(generated_text)
```
Below is the output I get. How do I increase the length of the returned text? I assumed increasing `max_new_tokens` would do it. Any help appreciated.
```
fibonnaci = [0, 1]
while len(fib
```
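For what it's worth, one common cause of this symptom is that generation options are being ignored rather than applied: both the raw Inference API and the `@huggingface/inference` JS client expect options like `max_new_tokens` nested under a `parameters` key, not at the top level of the request. Below is a minimal sketch of calling the HTTP Inference API directly with `fetch`; the model name and token variable are placeholders, not values from the post above.

```javascript
// Sketch: request payload with generation options nested under "parameters",
// which is where the Inference API looks for them (top-level keys besides
// "inputs" are ignored).
const payload = {
  inputs: 'def fibonnaci():',
  parameters: {
    max_new_tokens: 500,      // cap on newly generated tokens
    return_full_text: false,  // only return the completion, not the prompt
  },
};

// Hypothetical helper: POST the payload to the hosted Inference API.
// `model` and `token` are placeholders you would supply yourself.
async function generate(model, token) {
  const res = await fetch(`https://api-inference.huggingface.co/models/${model}`, {
    method: 'POST',
    headers: {
      Authorization: `Bearer ${token}`,
      'Content-Type': 'application/json',
    },
    body: JSON.stringify(payload),
  });
  return res.json();
}
```

With the `@huggingface/inference` client the shape is analogous: `textGeneration({ inputs, parameters: { max_new_tokens: 500 } })`.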