I think it may also be a good idea to add a separate, general medicine-focused community.
Health communities on social media tend to gravitate toward wellness content and less-than-evidence-based practices. That's fine on its own, but it may not be what some people would expect from the name alone.
That's not entirely true.
LLMs are trained to predict the next word given context, yes. But in order to do that, they develop an internal model that minimizes error across a wide range of contexts, and an emergent feature of this process is that the model DOES perform more than pure compression of the training data.
For example, GPT-3 is able to solve addition and subtraction problems that didn't appear in its training data. This suggests the model learned how to perform addition and subtraction, likely because that was easier or more efficient than storing all of the examples from the training data separately.
This is a simple-to-measure example, but it's enough to suggest that LLMs can extrapolate from their training data and do more than just stitch relevant parts of the dataset together.
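To make the memorization-vs-generalization distinction concrete, here's a toy sketch (my own illustration, not anything from GPT-3's actual evaluation): a "memorizer" that only stores training pairs scores zero on held-out arithmetic, while anything that induced the underlying rule scores perfectly.

```python
import random

# Toy illustration: compare a pure memorizer against a model that
# induced the addition rule, on problems held out from "training".
random.seed(0)

problems = [(a, b) for a in range(100) for b in range(100)]
random.shuffle(problems)
train, test = problems[:8000], problems[8000:]  # disjoint train/test split

memorizer = {(a, b): a + b for a, b in train}  # stores training pairs only

def memorizer_answer(a, b):
    # Returns None for any pair never seen in training
    return memorizer.get((a, b))

def rule_learner_answer(a, b):
    # Stands in for a model that learned the rule itself
    return a + b

mem_acc = sum(memorizer_answer(a, b) == a + b for a, b in test) / len(test)
rule_acc = sum(rule_learner_answer(a, b) == a + b for a, b in test) / len(test)
print(mem_acc, rule_acc)  # → 0.0 1.0
```

The point of the measurement is exactly this: if accuracy on problems guaranteed to be absent from training were near zero, pure lookup would explain it; nonzero accuracy means something rule-like was learned.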