#pretraining-data

#zero-shot-performance

from HackerNoon · 1 year ago · Artificial intelligence
What 34 Vision-Language Models Reveal About Multimodal Generalization | HackerNoon

from HackerNoon · 1 year ago · Artificial intelligence
AI Models Trained on Synthetic Data Still Follow Concept Frequency Trends | HackerNoon

from HackerNoon · 1 year ago
Across Metrics and Prompts, Frequent Concepts Outperform in Zero-Shot Learning | HackerNoon

The strong log-linear trend between concept frequency and zero-shot performance consistently holds across different prompting strategies, indicating that concepts encountered more frequently in the pretraining data yield better performance.
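
As intuition for what a log-linear trend means here, the sketch below fits zero-shot accuracy against the logarithm of concept frequency. All numbers, array names, and the resulting fit are invented placeholders for illustration, not figures from the article.

```python
import numpy as np

# Hypothetical example: how often each concept appears in the pretraining data,
# and the model's zero-shot accuracy on that concept. All values are invented
# placeholders used only to illustrate the shape of the fit.
concept_freq = np.array([10, 100, 1_000, 10_000, 100_000], dtype=float)
zero_shot_acc = np.array([0.12, 0.28, 0.45, 0.61, 0.78])

# A log-linear trend means accuracy grows roughly linearly in log(frequency):
#     acc ≈ slope * log10(freq) + intercept
slope, intercept = np.polyfit(np.log10(concept_freq), zero_shot_acc, deg=1)
print(f"accuracy gain per decade of frequency: {slope:.3f}")

# Predicted accuracy for a concept seen ~5,000 times in pretraining.
print(f"predicted accuracy at freq=5,000: {slope * np.log10(5_000) + intercept:.3f}")
```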

Artificial intelligence · from HackerNoon · 1 year ago
How AI Models Count and Match Concepts in Images and Text | HackerNoon

We define 'concepts' as the specific objects or class categories we seek to analyze in the pretraining datasets, such as the 1,000 classes in ImageNet.
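
For a rough sense of the text-side matching this describes, here is a minimal sketch of counting how often concept names appear in captions. The concept list, captions, and regex-based whole-word matching are illustrative assumptions; the article's actual pipeline also matches concepts on the image side.

```python
from collections import Counter
import re

# Illustrative only: the concepts and captions below are placeholders, not the
# article's data. Real pipelines also handle synonyms, lemmatization, and
# image-side matching.
concepts = ["goldfish", "tabby cat", "school bus"]  # e.g. ImageNet class names
captions = [
    "a goldfish swimming in a bowl",
    "a tabby cat sleeping next to a goldfish bowl",
    "children boarding a yellow school bus",
]

freq = Counter()
for caption in captions:
    text = caption.lower()
    for concept in concepts:
        # Whole-word match of the (possibly multi-word) concept name.
        if re.search(rf"\b{re.escape(concept)}\b", text):
            freq[concept] += 1

print(freq)  # Counter({'goldfish': 2, 'tabby cat': 1, 'school bus': 1})
```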

Artificial intelligence · from HackerNoon · 1 year ago
What 300GB of AI Research Reveals About the True Limits of "Zero-Shot" Intelligence | HackerNoon

Pretraining datasets shape the zero-shot performance of multimodal models through the predictable frequency of the concepts they contain.