#ollama

from The Hacker News
1 day ago

Researchers Find 175,000 Publicly Exposed Ollama AI Servers Across 130 Countries

While the service binds to the localhost address 127.0.0.1:11434 by default, a trivial configuration change exposes it to the public internet: binding it to 0.0.0.0 or a public interface.
Artificial intelligence
#local-llms
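The binding change described above is typically a one-line setting. A minimal sketch, using the `OLLAMA_HOST` environment variable that Ollama reads to choose its listen address:

```shell
# Default behaviour: the API listens only on loopback.
# ollama serve                      # binds 127.0.0.1:11434

# This exposes the API on every interface, including public ones;
# anyone who can reach port 11434 can then query or pull models.
OLLAMA_HOST=0.0.0.0:11434 ollama serve
```

Keeping the default loopback binding (or putting a reverse proxy with authentication in front) avoids the exposure the researchers describe.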
from ZDNET
2 days ago
Software development

I tried local AI on my M1 Mac, and the experience was brutal - here's why

Artificial intelligence
from Caktus Group
1 month ago

PydanticAI Agents Intro

PydanticAI Agents reduce boilerplate by letting you register plain Python functions as tools, automating the request/response loop, and making it easy to swap the backend model, for example to one served by Ollama.
Python
from Caktus Group
1 month ago

LLM Basics: Ollama Function Calling

Use Ollama's local Llama 3.2 model to perform function calling: provide function schemas or plain Python functions, then execute the tool calls the model returns.
Artificial intelligence
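The pattern the tutorial summary describes has two halves: a chat request that advertises tools, and a dispatch step that executes whatever tool calls come back. A hedged sketch follows; the `ollama.chat` call needs a local server running `llama3.2`, so it is shown only as a comment, while the dispatch step is plain Python and works on any tool-call payload of this shape (the payload format here is an illustrative assumption, not Ollama's exact response schema):

```python
def get_weather(city: str) -> str:
    """Return a (canned) weather report for a city."""
    return f"Sunny in {city}"

# Registry mapping tool names to the Python functions that implement them.
TOOLS = {"get_weather": get_weather}

def dispatch(tool_calls):
    """Execute returned tool calls against the registered functions."""
    results = []
    for call in tool_calls:
        fn = TOOLS[call["name"]]                 # look up the registered function
        results.append(fn(**call["arguments"]))  # call it with the model-chosen args
    return results

# With an Ollama server running, the request side looks roughly like (untested):
#   import ollama
#   resp = ollama.chat(model="llama3.2",
#                      messages=[{"role": "user", "content": "Weather in Oslo?"}],
#                      tools=[get_weather])
#   # resp.message.tool_calls would then be normalized and fed to dispatch().

print(dispatch([{"name": "get_weather", "arguments": {"city": "Oslo"}}]))
# prints ['Sunny in Oslo']
```

The registry-plus-dispatch split keeps the model's output (names and arguments) cleanly separated from the code that actually runs.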
from The Verge
2 months ago

Microsoft's AI-powered copy and paste can now use on-device AI

PowerToys Advanced Paste now uses on-device AI (Foundry Local or Ollama) to run models on NPUs, avoiding cloud API costs and keeping data local.
#local-ai
from ZDNET
2 months ago
Gadgets

I got tired of Windows 11, so I converted this Mini PC into a Linux powerhouse - here's how

from ZDNET
2 months ago
Gadgets

This Windows PC could easily replace my Mac Mini when it comes to local AI performance

from ZDNET
4 months ago
Artificial intelligence

5 reasons I use local AI on my desktop - instead of ChatGPT, Gemini, or Claude

from The Register
4 months ago

AI can't stop the sprint to adopt hot tech without security

Ollama provides a framework for running large language models locally, on a desktop machine or server. Cisco decided to research it because, in the words of Senior Incident Response Architect Dr. Giannis Tziakouris, Ollama has "gained popularity for its ease of use and local deployment capabilities." Talos researchers used the Shodan search engine to find unsecured Ollama servers and spotted over 1,100, around 20 percent of which were "actively hosting models susceptible to unauthorized access."
Information security
#ai
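An unsecured Ollama server answers unauthenticated API requests, which is what makes it discoverable in the first place. A hypothetical helper (not from the article) sketches the check: request the `/api/tags` endpoint, which an exposed server answers with a JSON list of installed models.

```python
import json
import urllib.request
import urllib.error

def is_ollama_exposed(host: str, port: int = 11434, timeout: float = 2.0) -> bool:
    """Return True if host answers Ollama's model-listing endpoint without auth."""
    url = f"http://{host}:{port}/api/tags"
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            json.load(resp)   # an Ollama server replies with JSON listing models
            return True
    except (urllib.error.URLError, OSError, ValueError):
        # Connection refused, timeout, or a non-JSON reply: not an open Ollama API.
        return False
```

Only probe hosts you own or are authorized to test; the same check against arbitrary internet hosts is exactly the scanning behavior the article describes.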
from ZDNET
5 months ago
Artificial intelligence

My go-to LLM tool just dropped a super simple Mac and PC app for local AI - why you should try it

Ollama has released a native GUI app for macOS and Windows, simplifying local AI usage.
from LogRocket Blog
8 months ago
Artificial intelligence

Building an agentic AI workflow with Ollama and React

Local large language models provide cost-effective, reliable, and private AI solutions; Ollama gives developers easy access to powerful open-source models.