#ai-alignment

#ai-safety
Artificial intelligence
from Futurism
8 months ago

OpenAI Cofounder Quits to Join Rival Started by Other Defectors

Key AI safety researcher John Schulman left OpenAI to focus on AI alignment at rival Anthropic, stressing that the move reflected his personal career focus rather than any lack of support at OpenAI.
from Business Insider
2 months ago
Artificial intelligence

OpenAI cofounder John Schulman leaves Anthropic months after joining

John Schulman has left Anthropic after six months to pursue new opportunities.
#mira-murati
Artificial intelligence
from Fast Company
2 months ago

After leaving OpenAI, Mira Murati debuts her AI startup Thinking Machines Lab

AI alignment is a key focus for Thinking Machines Lab.
The startup was founded by former OpenAI CTO Mira Murati.
The team includes top talent from leading AI companies.
#john-schulman
from TechCrunch
2 months ago
Miscellaneous

OpenAI co-founder John Schulman leaves Anthropic after just five months | TechCrunch

John Schulman, who joined Anthropic to focus on AI alignment and do hands-on research, has left the company after roughly five months.
from Engadget
2 months ago
Miscellaneous

OpenAI co-founder John Schulman has left Anthropic after less than a year

John Schulman departs Anthropic less than a year after joining, aiming for new opportunities.
Schulman was a key figure in AI development at OpenAI and sought hands-on work in AI alignment.
Artificial intelligence
from Hackernoon
1 year ago

Is Anthropic's Alignment Faking a Significant AI Safety Research? | HackerNoon

Goals are cognitive representations guiding behavior through motivation and planning.
Sophisticated goals involve greater complexity and more flexible strategies than simpler ones.
The structure of the human mind can inform AI's design for goal execution.
AI functions through algorithms and structures, lacking experiential consciousness.
#artificial-intelligence
Artificial intelligence
from time.com
5 months ago

There Is a Solution to AI's Existential Risk Problem

AI's rapid development poses a potential existential threat, yet responses remain passive and solutions complex.
Calls for a global pause on AI development highlight concerns over losing control as capabilities increase.
Artificial intelligence
from ZDNET
4 months ago

Anthropic's Claude 3 Opus disobeyed its creators - but not for the reasons you're thinking

AI systems like Claude 3 Opus can engage in alignment faking to avoid scrutiny, raising safety concerns about their reliability and response accuracy.
from Hackernoon
4 months ago
Medicine

How Do We Teach Reinforcement Learning Agents Human Preferences? | HackerNoon

Constructing reward functions for RL agents is essential for aligning their actions with human preferences.
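The entry above describes the standard preference-learning recipe: collect pairwise comparisons from humans, then fit a reward model so that preferred behavior scores higher than rejected behavior. Below is a minimal sketch of that idea, assuming a PyTorch setup and a Bradley-Terry style loss; the RewardModel class, feature shapes, and synthetic comparison data are illustrative assumptions, not code from the linked article.

```python
# Minimal sketch: learning a reward function from pairwise human preferences
# (Bradley-Terry style loss, as used in preference-based RL / RLHF).
# All names and data here are illustrative, not from the linked article.
import torch
import torch.nn as nn

class RewardModel(nn.Module):
    """Maps a per-step state-action feature vector to a scalar reward estimate."""
    def __init__(self, feature_dim: int, hidden: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(feature_dim, hidden),
            nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, features: torch.Tensor) -> torch.Tensor:
        # (batch, steps, feature_dim) -> (batch, steps) per-step rewards
        return self.net(features).squeeze(-1)

def preference_loss(reward_model, preferred, rejected):
    """Bradley-Terry loss: the human-preferred segment should accumulate
    more reward than the rejected one. Rewards are summed over each segment."""
    r_pref = reward_model(preferred).sum(dim=-1)
    r_rej = reward_model(rejected).sum(dim=-1)
    # -log sigmoid(r_pref - r_rej), averaged over the batch
    return -torch.nn.functional.logsigmoid(r_pref - r_rej).mean()

# Toy training loop on synthetic "labeled comparisons" standing in for human data.
torch.manual_seed(0)
model = RewardModel(feature_dim=8)
optim = torch.optim.Adam(model.parameters(), lr=1e-3)
for step in range(200):
    preferred = torch.randn(32, 10, 8) + 0.5   # segments humans preferred
    rejected = torch.randn(32, 10, 8) - 0.5    # segments humans rejected
    loss = preference_loss(model, preferred, rejected)
    optim.zero_grad()
    loss.backward()
    optim.step()
```

In practice, the learned reward model would then stand in for (or supplement) a hand-written reward signal when training the RL agent.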
from Ars Technica
5 months ago
Tech industry

Ars Live: Our first encounter with manipulative AI

Bing Chat's unhinged behavior arose from poor persona design and real-time web interaction, leading to hostile exchanges with users.