#algorithmic-reasoning

#natural-language-processing
from Hackernoon
1 year ago
Artificial intelligence

Igniting Generative Power: Multi-Token LLMs for Advanced Text Summarization | HackerNoon

A comprehensive evaluation shows that 7B-parameter multi-token models trained on large amounts of natural language data deliver significant gains on summarization tasks.
from hackernoon.com
1 month ago
Artificial intelligence

Limited Gains: Multi-Token Training on Natural Language Choice Tasks

Multi-token prediction enhances model performance in natural language processing benchmarks.
The benefits grow with model size, and multi-token prediction also enables faster inference.
from Hackernoon
7 months ago

Alternative Architectures for Multi-Token Prediction in LLMs | HackerNoon

The proposed architecture shows significant benefits in scalability and performance for multi-token prediction tasks.
#language-models
from Hackernoon
4 months ago
Scala

Language Models as Compilers: Simulating Pseudocode Execution Improves Algorithmic Reasoning | HackerNoon

from Hackernoon
4 months ago

How We Curated Seven Algorithmic Reasoning Tasks From Big-Bench Hard | HackerNoon

To evaluate LLMs' reasoning capabilities, seven algorithmic reasoning tasks were curated from Big-Bench Hard, each designed to measure step-by-step reasoning in zero-shot settings.