How does ChatGPT 'think'? Psychology and neuroscience crack open AI large language models
Briefly

The latest wave of AI relies heavily on machine learning, in which software identifies patterns in data on its own, without predetermined rules. The inner workings of neural networks are often likened to the brain's web of neurons, but it is hard to tell why a model strengthens particular connections or arrives at a given output.
Researchers have turned to explainable AI (XAI) to help reverse-engineer these systems. Methods include highlighting the parts of an image that led a model to assign a specific label, and building 'decision trees' that approximate a model's behaviour, with the aim of explaining why it makes particular recommendations or decisions.
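To make the 'decision tree' idea concrete, here is a minimal sketch (not from the article) of a common XAI approach known as a global surrogate: train a black-box neural network, then fit a shallow, human-readable decision tree to mimic its predictions. The dataset, model sizes, and tree depth below are illustrative assumptions only.

```python
# Sketch of a surrogate decision tree: an interpretable model trained to
# imitate a black-box classifier. All choices here are illustrative.
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.tree import DecisionTreeClassifier, export_text
from sklearn.metrics import accuracy_score

data = load_breast_cancer()
X_train, X_test, y_train, y_test = train_test_split(
    data.data, data.target, random_state=0
)

# The "black box": a small neural-network classifier.
black_box = MLPClassifier(hidden_layer_sizes=(32,), max_iter=2000, random_state=0)
black_box.fit(X_train, y_train)

# The surrogate: a shallow tree trained on the network's *predictions*
# (not the true labels), so its rules approximate the black box's behaviour.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X_train, black_box.predict(X_train))

# Fidelity: how often the tree agrees with the network on held-out data.
fidelity = accuracy_score(black_box.predict(X_test), surrogate.predict(X_test))
print(f"Surrogate fidelity: {fidelity:.2f}")

# Human-readable rules that approximate the network's decisions.
print(export_text(surrogate, feature_names=list(data.feature_names)))
```

The printed rules do not reveal how the network computes internally; they only offer an approximate, interpretable account of its input-output behaviour, which is the trade-off such XAI methods accept.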
Read at Nature