Content Deep Dive

What do LLMs think when you don't tell them what to think about?

Blog post from Together AI

Post Details
Company: Together AI
Date Published: -
Author: Yongchan Kwon and James Zou
Word Count: 1,143
Language: English
Hacker News Points: -
Summary

Research into the behavior of large language models (LLMs) shows that near-unconstrained generation exposes innate preferences and biases that remain hidden when models are steered by specific prompts or templates. Using open-ended, topic-neutral seed prompts, the researchers found that different model families exhibit distinct semantic tendencies and levels of content complexity: some models, such as GPT-OSS, default to advanced programming and mathematics, while others, such as Qwen, produce multiple-choice questions. The study also observes that models tend to generate repetitive or degenerate text, which can serve as an indicator of safety and privacy risks. These systematic patterns persist across various setups, suggesting that understanding LLMs requires examining their default generative behaviors alongside standard benchmark evaluations.
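The summary mentions that models sometimes produce repetitive or degenerate text under near-unconstrained generation. The post does not specify how such outputs were detected, but a minimal sketch of one common heuristic, the fraction of repeated n-grams in a generation, illustrates the idea (the function name and threshold logic here are illustrative assumptions, not the authors' method):

```python
# Hedged sketch: flag degenerate generations by the fraction of n-grams
# that occur more than once. This is an assumed heuristic, not the exact
# metric used in the Together AI study.
from collections import Counter

def repeated_ngram_fraction(text: str, n: int = 3) -> float:
    """Return the fraction of word n-grams in `text` that are duplicates."""
    tokens = text.split()
    if len(tokens) < n:
        return 0.0
    ngrams = [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]
    counts = Counter(ngrams)
    # Count every occurrence of any n-gram that appears more than once.
    repeated = sum(c for c in counts.values() if c > 1)
    return repeated / len(ngrams)

degenerate = "the cat sat the cat sat the cat sat the cat sat"
fluent = "open-ended prompts reveal what models generate by default"
print(repeated_ngram_fraction(degenerate))  # → 1.0 (every trigram repeats)
print(repeated_ngram_fraction(fluent))      # → 0.0 (all trigrams unique)
```

A generation scoring near 1.0 on this measure is looping; scoring near 0.0 suggests normal, non-repetitive text. Real analyses would also look at character-level loops and exact-substring repetition.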