Answers to key questions about AI in IT security | Computer Weekly
Briefly

"Generative AI is a type of artificial intelligence that is incredibly good at identifying the next most likely token in a complex sequence. This is one reason why it handles human language so well, and why other, earlier iterations of machine learning did not. Human language is extremely complex. GenAI can mimic the qualities of its training data, and most of the most popular models on the market are trained on a lot of human language."
"AI chatbots such as Claude, Gemini, ChatGPT - or the security equivalents, including Microsoft Security Copilot, Google Gemini, Charlotte AI, and Purple AI - are powered by large language models (LLMs). As such, they can respond to open-ended questions, create nuanced language, provide contextually aware replies and adapt to topics, especially security topics, without needing explicit programming for each scenario."
Generative AI excels at predicting the next most likely token in a complex sequence, which is why it handles human language so well: the most popular models are trained on vast amounts of human-generated text. Security tools apply GenAI for content creation such as incident summaries and query translation, for knowledge articulation via chatbots that support threat research and documentation, and for behaviour modelling that assists triage and investigation agents. LLM-powered chatbots can answer open-ended security questions, generate nuanced, context-aware responses, and adapt to diverse topics without explicit programming for each scenario. Vendors market both current and speculative capabilities, which creates confusion about what is available today versus what is merely anticipated.
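To make the next-token idea concrete, here is a minimal, purely illustrative Python sketch. The vocabulary, scores, and prompt are invented for this example; a real LLM scores tens of thousands of tokens with a neural network rather than a hand-written list, but the selection step works the same way.

```python
import math

def softmax(scores):
    # Convert raw scores into a probability distribution over candidate tokens.
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical scores a model might assign to candidate next tokens
# after the prompt "Suspicious login detected from".
candidates = ["a", "an", "the", "unknown", "192.168."]
logits = [1.2, 0.4, 2.1, 3.0, 2.6]

probs = softmax(logits)
ranked = sorted(zip(candidates, probs), key=lambda pair: pair[1], reverse=True)

for token, p in ranked:
    print(f"{token!r}: {p:.2f}")

# Greedy decoding appends the most likely token and repeats the process,
# which is how a chatbot builds an incident summary one token at a time.
print("next token:", ranked[0][0])
```

In practice, security copilots wrap this generation loop with retrieval of alert data and product documentation, which is how the same mechanism produces incident summaries or translated queries rather than generic text.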
Read at ComputerWeekly.com