The demos you are about to see have not yet (nor will they ever...) been approved for launch by OpenAI.
This session must not be recorded.
Everyone watching this stream works for the same organisation.
tokenise("Once upon a")
tokenise(" time")
tokenise("Once upon a" + " time")
tokenise(",")
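The calls above can be mimicked with a toy stand-in for the real BPE tokeniser (the regex below is an illustration of mine, not OpenAI's tokeniser). It reproduces the two behaviours the slides rely on: a leading space belongs to the token that follows it, and tokenising a concatenation here yields the concatenation of the tokenisations.

```python
import re

def tokenise(text):
    # Toy stand-in for a BPE tokeniser: each word keeps its leading
    # space, punctuation stands alone. NOT the real GPT tokeniser.
    return re.findall(r" ?[A-Za-z]+|[^A-Za-z ]", text)

tokenise("Once upon a")            # ["Once", " upon", " a"]
tokenise(" time")                  # [" time"]
tokenise("Once upon a" + " time")  # ["Once", " upon", " a", " time"]
tokenise(",")                      # [","]
```

Note that `tokenise(" time")` and `tokenise("time")` differ: the leading space is part of the token.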
Run for n cycles
Run until the last n tokens match one of the stop sequences
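The two stopping rules can be sketched as a single generation loop (`next_token` stands in for the model; the function names are assumptions of mine):

```python
def generate(prompt, next_token, max_tokens, stop_sequences):
    # Run for at most max_tokens cycles, or until the tail of the
    # completion matches one of the stop sequences (token lists).
    completion = []
    for _ in range(max_tokens):
        completion.append(next_token(prompt + completion))
        for stop in stop_sequences:
            if completion[-len(stop):] == stop:
                return completion
    return completion
```

Whether the matched stop sequence is kept or stripped from the returned text is a presentation choice; this sketch keeps it.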
tokenise("Once upon a")
tokenise("time")
tokenise("Once upon a")
(tokenise(" time"), logprob("96.45%"))
tokenise("Once upon a")
[(tokenise(" time"), logprob("96.45%")),
(tokenise(" Time"), logprob("0.67%")),
(tokenise(" midnight"), logprob("0.31%")), ...]
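The percentages shown are just exponentiated log-probabilities. A small sketch (the raw logprob values below are hypothetical, chosen only so they round to the figures on the slide):

```python
import math

# Hypothetical raw logprobs for the top completions of "Once upon a".
candidates = [(" time", -0.0361), (" Time", -5.006), (" midnight", -5.776)]

for token, logprob in candidates:
    # exp(logprob) recovers the probability; format it as a percentage.
    print(f"{token!r}: {math.exp(logprob):.2%}")
```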
The extent to which each repeated occurrence of a token lowers the future probability of that token.
The extent to which any previous occurrence of a token lowers the future probability of that token.
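A minimal sketch of how the two penalties adjust a token's logit, mirroring the documented behaviour (frequency penalty scales with the count of prior occurrences; presence penalty is a flat deduction for any prior occurrence; the function name is mine):

```python
def penalised_logit(logit, count, frequency_penalty, presence_penalty):
    # count = how many times this token already appears in the text.
    # The frequency penalty scales with that count; the presence
    # penalty applies once, no matter how many occurrences there were.
    return (logit
            - frequency_penalty * count
            - presence_penalty * (1 if count > 0 else 0))
```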
The likelihood that a lower-probability token will have its relative probability increased.
Only consider the smallest set of most likely tokens whose cumulative probability reaches p.
0 = The best guess
0.5 = A good guess
1 = Hold my beer...
N.B. It only makes sense to use one of these (temperature or top_p); the other should be set to 1.
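Both knobs reshape the same next-token distribution. A sketch under the definitions above (function names are mine, not the API's):

```python
def apply_temperature(probs, temperature):
    # 0 -> greedy (all mass on the best guess); 1 leaves the
    # distribution unchanged; higher values flatten it, boosting
    # lower-probability tokens.
    if temperature == 0:
        best = max(range(len(probs)), key=probs.__getitem__)
        return [1.0 if i == best else 0.0 for i in range(len(probs))]
    weights = [p ** (1 / temperature) for p in probs]
    total = sum(weights)
    return [w / total for w in weights]

def top_p_filter(probs, p):
    # Keep the most likely tokens until their cumulative probability
    # reaches p, then renormalise; everything else gets probability 0.
    order = sorted(range(len(probs)), key=probs.__getitem__, reverse=True)
    kept, cumulative = set(), 0.0
    for i in order:
        kept.add(i)
        cumulative += probs[i]
        if cumulative >= p:
            break
    total = sum(probs[i] for i in kept)
    return [probs[i] / total if i in kept else 0.0
            for i in range(len(probs))]
```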
Generate m sequences of n tokens, then select the sequence of tokens with the highest probability.
Expensive...
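best_of can be sketched as m independent samples scored by total log-probability (`sample_completion` and its return shape are assumptions of mine):

```python
import math

def best_of(sample_completion, m):
    # Draw m candidate completions; each sample returns
    # (tokens, logprobs). Keep the candidate with the highest total
    # log-probability. Expensive: the model is run m times.
    best_tokens, best_score = None, -math.inf
    for _ in range(m):
        tokens, logprobs = sample_completion()
        score = sum(logprobs)
        if score > best_score:
            best_tokens, best_score = tokens, score
    return best_tokens
```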
Before submission, concatenate this string onto the prompt.
E.g. "\nA:"
After receiving a response, concatenate this string onto the completion.
E.g. "\n\nQ:"
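Together, the two settings maintain a running Q/A transcript between calls. A sketch, where `ask_model` stands in for the API call and the function name is mine:

```python
def qa_turn(transcript, question, ask_model,
            inject_start_text="\nA:", inject_restart_text="\n\nQ:"):
    # Before submission: concatenate the start text onto the prompt,
    # so the model answers in character.
    prompt = transcript + question + inject_start_text
    answer = ask_model(prompt)
    # After the response: concatenate the restart text onto the
    # completion, ready for the next question.
    return prompt + answer + inject_restart_text
```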
~ 25 seconds
~ 50 seconds
~ 15 seconds
~ 15 seconds