#llm-training

from Computerworld
1 week ago

Anthropic's Claude AI gets a new constitution embedding safety and ethics

Anthropic has completely overhauled the "Claude constitution," the document that sets out the ethical parameters governing its AI model's reasoning and behavior. Launched at the World Economic Forum's Davos summit, the new constitution's principles are that Claude should be "broadly safe" (not undermining human oversight), "broadly ethical" (honest, and avoiding inappropriate, dangerous, or harmful actions), "genuinely helpful" (benefiting its users), and "compliant with Anthropic's guidelines."
Artificial intelligence
from The Atlantic
2 months ago

The Nonprofit Feeding the Entire Internet to AI Companies

Common Crawl archived paywalled journalism and made it accessible, enabling major AI companies to train large language models without paying publishers.
Privacy technologies
from ZDNET
5 months ago

You should try Gemini's new 'incognito' chat mode - here's why and what it does

Google adds Temporary Chats in Gemini that vanish after 72 hours and are excluded from personalization and model training.
Intellectual property law
from HackerNoon
1 year ago

Judge Finds AI Training on Complete Books 'Reasonably Necessary' | HackerNoon

The amount and substantiality of the portion copied are judged by their reasonableness for transformative purposes.
Intellectual property law
from HackerNoon
1 year ago

Anthropic Admits to Copying Books en masse for Claude-Can Fair Use Save It? | HackerNoon

Anthropic used multiple methods to copy and prepare works for training their LLM, including cleaning, tokenization, and retaining compressed copies.
Scala
from HackerNoon
1 year ago

Why 4-Bit Quantization Is the Sweet Spot for Code LLMs | HackerNoon

4-bit integer quantization best balances model performance and size, outperforming half-precision models.
2-bit quantization significantly degrades performance, leading to incoherent responses.
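The trade-off the article describes can be illustrated with a minimal sketch of symmetric 4-bit integer quantization: each weight is mapped to one of 16 signed codes via a shared scale factor. The toy weights below are hypothetical, and real quantizers (e.g. the Q4 block formats used by llama.cpp) operate per-block with per-block scales rather than one scale for the whole tensor.

```python
def quantize_int4(weights):
    """Map floats to signed 4-bit codes in [-8, 7] with one shared scale."""
    scale = max(abs(w) for w in weights) / 7.0  # largest weight maps to +/-7
    q = [max(-8, min(7, round(w / scale))) for w in weights]
    return q, scale

def dequantize_int4(q, scale):
    """Recover approximate float weights from the 4-bit codes."""
    return [v * scale for v in q]

weights = [0.12, -0.53, 0.88, -0.07, 0.31]
q, scale = quantize_int4(weights)
approx = dequantize_int4(q, scale)
max_err = max(abs(w - a) for w, a in zip(weights, approx))
print(q)        # 4-bit codes: [1, -4, 7, -1, 2]
print(max_err)  # reconstruction error stays below one quantization step
```

With 16 levels the worst-case rounding error is half a quantization step, which is usually small relative to the weights themselves; at 2 bits only 4 levels remain, so the same rounding destroys far more information, matching the incoherent outputs the article reports.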