The GPT model limits how much input it can process in a single request, so when a recording's transcript is too long, it must be split across requests, each of which produces its own output. To address this limitation, we have introduced advanced prompting features that help produce a single, more desirable result.
There are two approaches to handling long-duration recordings:
Condense Transcript
(case 1) Enable the "Condense Transcript" toggle to shorten the transcript so that the input fits within the model's limit.
Pro: One output is generated
Con: Some information may be disregarded and absent in the final result.
(case 2) Disable the "Condense Transcript" toggle to feed the entire transcript into the model. If the transcript exceeds the model's limit, multiple outputs are generated.
Pro: Ideal for Prompt Experts who want to test Grouping Prompts (case 3)
Con: Multiple outputs are generated
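To illustrate why case 2 yields multiple outputs, here is a minimal sketch of how a long transcript is split into chunks that each fit a model limit. The word-based limit and the `split_transcript` helper are assumptions for illustration; real GPT limits are measured in tokens, not words.

```python
# Hypothetical sketch: splitting a long transcript into model-sized chunks.
# `max_words` stands in for the model's real token limit.

def split_transcript(transcript: str, max_words: int = 100) -> list[str]:
    """Split a transcript into chunks that each fit the assumed limit."""
    words = transcript.split()
    return [
        " ".join(words[i:i + max_words])
        for i in range(0, len(words), max_words)
    ]

# A transcript about 2.5x the assumed limit...
transcript = ("word " * 250).strip()
chunks = split_transcript(transcript)
print(len(chunks))  # -> 3: each chunk becomes a separate request and output
```

Each chunk is sent to the model separately, which is why the final result arrives as several partial outputs rather than one.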
Grouping Prompt
(case 3) Disable the "Condense Transcript" toggle and add a Grouping Prompt that instructs the model to merge the multiple outputs into a single final result.
Pro: One output is generated
Con: Some information may be disregarded and absent in the final result.
Example Grouping Prompt
"Merge information across the entire call to avoid repetition and generate a single output."
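The grouping step in case 3 can be sketched as follows: the per-chunk outputs are combined into one request, led by the Grouping Prompt, so the model can merge them. `call_model` is a hypothetical stand-in for the real GPT call; only the overall flow reflects the feature described above.

```python
# Hypothetical sketch of the Grouping Prompt step (case 3).

GROUPING_PROMPT = (
    "Merge information across the entire call to avoid repetition "
    "and generate a single output."
)

def call_model(prompt: str) -> str:
    # Stub for illustration; a real implementation would call the GPT API.
    return f"[merged summary of {prompt.count('---')} partial outputs]"

def group_outputs(partial_outputs: list[str]) -> str:
    """Combine per-chunk outputs behind the Grouping Prompt into one request."""
    body = "\n---\n".join(partial_outputs)
    return call_model(f"{GROUPING_PROMPT}\n---\n{body}")

final = group_outputs(["Summary of part 1.", "Summary of part 2."])
print(final)  # -> [merged summary of 2 partial outputs]
```

Because the merge is itself a model call with limited input, some details from the partial outputs may be dropped, which is the trade-off noted in case 3's Con.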