
"If we make a system of autonomous agents, we need lots of trust between agents. If I delegate decisions to an AI, I then have to trust it, and if that AI relies on other AIs, it must trust them. Therefore we will need to develop a very robust trust system that can detect, verify, and generate trust between humans and machines, and more importantly between machines and machines."
"Applicable research in trust follows two directions: understanding better how humans trust each other, and applying some of those principles in an abstract way into mechanical systems. Technologists have already created primitive trust systems to manage the security of data clouds and communications. For instance, should this device be allowed to connect? Can it be trusted to do what it claims it can do? How do we verify its identity, and its behavior? And so on."
"Today when I am shopping for an AI, accuracy is the primary quality I am looking for. Will it give me correct answers? How much does it hallucinate? These qualities are proxies for trust. Can I trust the AI to give me an answer that is reliable? As AIs start to do more, to go out into the world to act, to make decisions for us, their trustworthiness becomes crucial."
Wherever there is autonomy, trust must follow. Delegating decisions to an AI means trusting that system and, transitively, the other AIs it relies on. Research on trust proceeds along two lines: understanding how humans trust each other, and abstracting those principles into mechanical systems. Today's technical trust mechanisms manage identity, security, and device permissions, but they are primitive next to adaptive agents, whose behaviors and identities are fluid, opaque, and shifting, making trust both harder to establish and more consequential. Accuracy and hallucination rates serve as proxies for trust today; going forward, trust will be unbundled into components such as security, reliability, responsibility, and accountability, each of which will need precise measurement and then synthesis into a usable whole.
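To make that last point concrete, here is a minimal sketch of what "unbundling" trust and then synthesizing it might look like. Everything in it is hypothetical: the component names come from the summary above, but the `TrustProfile` structure, the weights, and the `admit` threshold are illustrative assumptions, not anything the article specifies.

```python
from dataclasses import dataclass

@dataclass
class TrustProfile:
    """Per-agent trust, unbundled into separately measured components.

    All fields are hypothetical 0.0-1.0 scores chosen for illustration.
    """
    security: float        # e.g., strength of identity verification
    reliability: float     # e.g., 1.0 minus observed error/hallucination rate
    responsibility: float  # e.g., fraction of delegated tasks completed in scope
    accountability: float  # e.g., auditability of the agent's decision trail

def synthesize(p: TrustProfile, weights=(0.3, 0.3, 0.2, 0.2)) -> float:
    """Collapse the component scores into one trust score (weighted mean)."""
    parts = (p.security, p.reliability, p.responsibility, p.accountability)
    return sum(w * v for w, v in zip(weights, parts))

def admit(p: TrustProfile, threshold: float = 0.8) -> bool:
    """The primitive gate today's systems apply: should this agent connect?"""
    return synthesize(p) >= threshold

# Example: an agent with strong identity but a weak audit trail.
agent = TrustProfile(security=0.95, reliability=0.90,
                     responsibility=0.85, accountability=0.40)
print(f"trust={synthesize(agent):.2f}, admitted={admit(agent)}")
```

A weighted mean is the simplest possible synthesis; the summary's point is precisely that real systems will need something richer, since a single scalar hides which component failed and why.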
Read at The Technium