#model-evaluation

#machine-learning
from Hackernoon
1 year ago
Artificial intelligence

Real-World Code Performance: Multi-Token Finetuning on CodeContests | HackerNoon

from Hackernoon
3 months ago
Artificial intelligence

When Smaller is Smarter: How Precision-Tuned AI Cracks Protein Mysteries | HackerNoon

#pretraining-data
from Hackernoon
1 year ago
Artificial intelligence

AI Models Trained on Synthetic Data Still Follow Concept Frequency Trends | HackerNoon

from Hackernoon
1 year ago
Artificial intelligence

'Let It Wag!' and the Limits of Machine Learning on Rare Concepts | HackerNoon

from Hackernoon
1 year ago

AI Training Data Has a Long-Tail Problem | HackerNoon

Pretraining datasets exhibit a long-tailed distribution of concept frequencies, which drives performance disparities between common and rare concepts.
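
As a rough illustration of what such a long tail looks like, the sketch below counts concept occurrences in a toy caption corpus and checks how much of the mass the head captures; the `captions` list and `CONCEPTS` vocabulary are hypothetical stand-ins, not the paper's data or code.

```python
from collections import Counter

# Hypothetical inputs: a tiny caption corpus and a concept vocabulary.
captions = [
    "a photo of a dog on a beach",
    "a dog playing with a ball",
    "an axolotl in an aquarium",
]
CONCEPTS = {"dog", "beach", "ball", "axolotl", "aquarium"}

# Count how often each concept appears across the corpus.
freq = Counter()
for caption in captions:
    tokens = set(caption.lower().split())
    for concept in CONCEPTS & tokens:
        freq[concept] += 1

# In a long-tailed corpus, a small head of concepts holds most of the
# occurrence mass while the majority of concepts sit in a sparse tail.
ranked = freq.most_common()
total = sum(freq.values())
head = ranked[: max(1, len(ranked) // 10)]
head_mass = sum(count for _, count in head) / total
print(f"top 10% of concepts cover {head_mass:.0%} of occurrences")
```
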
Data science
from Hackernoon
2 years ago

Deep Dive into MS MARCO Web Search: Unpacking Dataset Characteristics | HackerNoon

The MS MARCO dataset reveals considerable multilingual disparity and significant data skew, highlighting challenges in model evaluation and training.
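
A minimal sketch of how that kind of skew can be quantified, assuming hypothetical per-document language labels rather than actual MS MARCO statistics:

```python
from collections import Counter

# Hypothetical per-document language labels for a web-scale corpus.
doc_languages = ["en"] * 900 + ["zh"] * 50 + ["de"] * 30 + ["sw"] * 2

counts = Counter(doc_languages)
total = sum(counts.values())

# A crude skew indicator: the share held by the single largest language.
top_lang, top_count = counts.most_common(1)[0]
print(f"{top_lang} holds {top_count / total:.0%} of documents")
for lang, count in counts.most_common():
    print(f"{lang}: {count / total:.1%}")
```
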
Artificial intelligence
from Hackernoon
11 months ago

Evaluating Multimodal Speech Models Across Diverse Audio Tasks | HackerNoon

The study draws on diverse speech datasets to evaluate model performance across a range of speech tasks and to improve generalization.
from Hackernoon
1 month ago

AI Learns Common Sense from Touch, Not Just Vision | HackerNoon

Model size significantly affects OCTOPI's accuracy on physical-understanding tasks.
Incorporating physical property descriptions improves language-model performance on complex understanding tasks.
Data science
from Hackernoon
1 month ago

The Future of Remote Sensing: Few-Shot Learning and Explainable AI | HackerNoon

Few-shot learning techniques help remote-sensing models perform efficiently with limited data, and the article emphasizes the accompanying need for explainable AI.
from hackernoon.com
1 month ago

Limited Gains: Multi-Token Training on Natural Language Choice Tasks

Multi-token prediction delivers only limited gains on natural language choice benchmarks.
Benefits scale with model size, and the extra prediction heads enable faster inference.
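
For intuition, here is a toy PyTorch sketch of the multi-token setup described in these articles (an assumed architecture, not the paper's implementation): a shared trunk feeds several parallel output heads, with head k trained to predict the token k+1 positions ahead. All dimensions are toy values.

```python
import torch
import torch.nn as nn

class MultiTokenHead(nn.Module):
    """Multi-token prediction: one shared trunk, n_future output heads,
    where head k predicts the token k+1 steps ahead."""

    def __init__(self, d_model: int, vocab_size: int, n_future: int = 4):
        super().__init__()
        self.heads = nn.ModuleList(
            nn.Linear(d_model, vocab_size) for _ in range(n_future)
        )

    def forward(self, hidden: torch.Tensor) -> list[torch.Tensor]:
        # hidden: (batch, seq, d_model) from the shared transformer trunk.
        # Training sums cross-entropy over all n_future shifted targets.
        return [head(hidden) for head in self.heads]

# Toy usage with random trunk activations.
hidden = torch.randn(2, 16, 64)           # (batch, seq, d_model)
model = MultiTokenHead(d_model=64, vocab_size=100)
logits = model(hidden)                    # n_future tensors of (2, 16, 100)
```
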
Artificial intelligence
from Hackernoon
1 year ago

Behind the Scenes: The Prompts and Tricks That Made Many-Shot ICL Work | HackerNoon

GPT-4(V)-Turbo demonstrates variable performance in many-shot ICL, with notable failures to scale effectively under certain conditions.
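
For context, many-shot ICL simply packs far more solved examples into the prompt than the classic few-shot recipe. A hedged sketch of the idea, using a hypothetical example format rather than the paper's actual prompt templates:

```python
# Hypothetical solved examples; in the many-shot regime this list grows to
# hundreds of pairs, bounded only by the model's context window.
examples = [
    {"question": "2 + 2 = ?", "answer": "4"},
    {"question": "3 * 5 = ?", "answer": "15"},
]

def build_many_shot_prompt(examples: list[dict], query: str) -> str:
    """Concatenate all solved examples, then append the unanswered query."""
    shots = "\n\n".join(
        f"Q: {ex['question']}\nA: {ex['answer']}" for ex in examples
    )
    return f"{shots}\n\nQ: {query}\nA:"

print(build_many_shot_prompt(examples, "7 - 4 = ?"))
```
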
from Hackernoon
2 months ago

Comparing Chameleon AI to Leading Image-to-Text Models | HackerNoon

In evaluating Chameleon, we focus on tasks requiring text generation conditioned on images, particularly image captioning and visual question-answering, with results grouped by task specificity.
Artificial intelligence
Bootstrapping
from Hackernoon
7 months ago

How Many Glitch Tokens Hide in Popular LLMs? Revelations from Large-Scale Testing | HackerNoon

The study reveals that simple indicators can effectively detect under-trained tokens in language models, improving token prediction accuracy.
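
A minimal sketch of one such indicator, assuming (hypothetically) that under-trained tokens keep unusually small embedding norms; the random embedding matrix and planted indices are stand-ins for a real model's weights:

```python
import torch

vocab_size, d_model = 1000, 64
embeddings = torch.randn(vocab_size, d_model)  # stand-in embedding matrix
embeddings[[13, 666]] *= 0.01                  # plant two "glitch" rows

# Flag tokens whose embedding norm sits far below the vocabulary average.
norms = embeddings.norm(dim=-1)
threshold = norms.mean() - 2 * norms.std()
candidates = torch.nonzero(norms < threshold).flatten()
print("candidate glitch token ids:", candidates.tolist())
```
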
#ai-benchmarks
from Medium
2 months ago
Artificial intelligence

Beyond Benchmarks: Really Evaluating AI

Benchmarks help standardize test sets for AI models, ensuring fair evaluation of performance.
#ai
Software development
from InfoQ
4 months ago

OpenAI Introduces Software Engineering Benchmark

SWE-Lancer benchmark assesses AI language models on real-world freelance software engineering tasks.
AI models face significant challenges in software engineering despite advancements.
from TechCrunch
3 months ago

OpenAI partner says it had relatively little time to test the company's newest AI models | TechCrunch

This evaluation was conducted in a relatively short time, and we only tested the model with simple agent scaffolds. We expect higher performance [on benchmarks] is possible with more elicitation effort.
Artificial intelligence
from WIRED
3 months ago

This Tool Probes Frontier AI Models for Lapses in Intelligence

Scale AI's new tool, Scale Evaluation, automates testing of AI models to identify weaknesses and improve performance effectively.
from Hackernoon
1 year ago

Limitations in AI Model Evaluation: Bias, Efficiency, and Human Judgment | HackerNoon

The article presents 12 key aspects for evaluating text-to-image generation models, highlighting the need for continuous research and improvement in assessment metrics.