• Optional best_of_sequences: TextGenerationStreamBestOfSequence[]
Additional sequences when using the best_of parameter
inference/src/tasks/nlp/textGenerationStream.ts:67
• finish_reason: TextGenerationStreamFinishReason
Generation finish reason
inference/src/tasks/nlp/textGenerationStream.ts:57
• generated_tokens: number
Number of generated tokens
inference/src/tasks/nlp/textGenerationStream.ts:59
• prefill: TextGenerationStreamPrefillToken[]
Prompt tokens
inference/src/tasks/nlp/textGenerationStream.ts:63
• Optional seed: number
Sampling seed if sampling was activated
inference/src/tasks/nlp/textGenerationStream.ts:61
• tokens: TextGenerationStreamToken[]
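
A minimal usage sketch showing where these detail fields surface when streaming (assumes the `HfInference` client from `@huggingface/inference` and a Node environment; the token and model name are illustrative placeholders):

```ts
import { HfInference } from "@huggingface/inference";

// Illustrative values: replace the access token and model with your own.
const hf = new HfInference("hf_xxx");

async function run() {
  for await (const output of hf.textGenerationStream({
    model: "mistralai/Mistral-7B-Instruct-v0.2",
    inputs: "Write a haiku about streams.",
    parameters: { max_new_tokens: 50 },
  })) {
    // Each chunk carries one newly generated token.
    process.stdout.write(output.token.text);

    // `details` is null on intermediate chunks; when present (typically on the
    // final chunk, if details were requested), it exposes the fields listed above.
    if (output.details) {
      console.log("\nfinish_reason:", output.details.finish_reason);
      console.log("generated_tokens:", output.details.generated_tokens);
      console.log("seed:", output.details.seed); // undefined unless sampling was activated
    }
  }
}

run();
```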