Across Metrics and Prompts, Frequent Concepts Outperform in Zero-Shot Learning
The strong log-linear trend between concept frequency and zero-shot performance holds consistently across different prompting strategies: concepts that appear more often in the pretraining data yield better zero-shot performance.
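As an illustration of what a log-linear trend means here, the minimal sketch below fits accuracy ≈ a · log10(frequency) + b to a handful of made-up (frequency, accuracy) pairs. The arrays and fitted coefficients are purely hypothetical, not results from the article.

```python
import numpy as np

# Hypothetical inputs: pretraining frequency and zero-shot accuracy per concept.
concept_frequency = np.array([120, 950, 8_400, 76_000, 510_000], dtype=float)
zero_shot_accuracy = np.array([0.18, 0.29, 0.41, 0.55, 0.68])

# Fit accuracy = a * log10(frequency) + b, i.e. a log-linear trend.
log_freq = np.log10(concept_frequency)
a, b = np.polyfit(log_freq, zero_shot_accuracy, deg=1)

# The slope a indicates roughly how much accuracy improves per order of
# magnitude more pretraining examples of a concept.
print(f"accuracy ≈ {a:.3f} * log10(frequency) + {b:.3f}")
```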
How AI Models Count and Match Concepts in Images and Text
We define 'concepts' as the specific objects or class categories we seek to analyze in the pretraining datasets, such as the 1,000 classes in ImageNet.
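To make the counting step concrete, here is a small sketch of matching concept names against pretraining captions. The concept list, captions, and helper function are illustrative assumptions, not the article's actual pipeline, which may also involve lemmatization, aliases, and image-side tagging.

```python
from collections import Counter
import re

# Hypothetical concept list (e.g. a few ImageNet-style class names) and captions.
concepts = ["golden retriever", "espresso", "umbrella"]
captions = [
    "A golden retriever catching a frisbee in the park",
    "An espresso next to a croissant on a wooden table",
    "A golden retriever under a red umbrella",
]

def count_concept_frequency(concepts, captions):
    """Count how many captions mention each concept (case-insensitive,
    whole-phrase match). A real pipeline would likely also normalize word
    forms and handle synonyms; this only shows the basic matching idea."""
    counts = Counter()
    patterns = {c: re.compile(rf"\b{re.escape(c)}\b", re.IGNORECASE) for c in concepts}
    for caption in captions:
        for concept, pattern in patterns.items():
            if pattern.search(caption):
                counts[concept] += 1
    return counts

print(count_concept_frequency(concepts, captions))
# Counter({'golden retriever': 2, 'espresso': 1, 'umbrella': 1})
```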