#ai-inference

#qualcomm
from The Verge
5 days ago
Artificial intelligence

Qualcomm is turning parts from cellphone chips into AI chips to rival Nvidia

from The Register
5 days ago

Qualcomm announces AI accelerators and racks they'll run in

Qualcomm pitches the new accelerators as a generational leap in efficiency and performance for AI inference workloads, delivering greater than 10x higher effective memory bandwidth at much lower power consumption.
Artificial intelligence
from 24/7 Wall St.
2 weeks ago

Oracle Executive Just Gave 50,000 Reasons to Buy AMD Stock Right Now

AMD rapidly became a meaningful AI GPU competitor, gaining 10–15% market share through MI300X performance, hyperscaler partnerships, and a roadmap toward more efficient inference.
Artificial intelligence
from Techzine Global
2 weeks ago

Intel expands AI portfolio with Crescent Island GPU

Intel's Crescent Island GPU targets AI inference with 160GB LPDDR5X, emphasizing energy efficiency, cost-effectiveness, and air-cooled deployment, with first units due H2 2026.
Artificial intelligence
from Telecompetitor
1 month ago

123NET Expands Southfield Data Center for AI and High-Density Deployments

123NET expanded its Southfield DC1 facility with 4 MW of high-density GPU colocation capacity, liquid and air cooling, and on-site DET-iX free peering for low-latency AI workloads.
Artificial intelligence
from Fortune
1 month ago

Jensen Huang doesn't care about Sam Altman's AI hype fears: he thinks OpenAI will be the first "multi-trillion dollar hyperscale company"

Relentless inference demand from accelerated AI computing will drive a generational shift away from general-purpose computing, positioning OpenAI to become a multitrillion-dollar hyperscale company.
from Silicon Valley Journals
1 month ago

Baseten raises $150 million to power the future of AI inference

Baseten just pulled in a massive $150 million Series D, vaulting the AI infrastructure startup to a $2.15 billion valuation and cementing its place as one of the most important players in the race to scale inference, the behind-the-scenes compute that makes AI apps actually run. If the last generation of great tech companies was built on the cloud, the next wave is being built on inference. Every time you ask a chatbot a question, generate an image, or tap into an AI-powered workflow, inference is happening under the hood.
Venture
Artificial intelligence
from Fortune
1 month ago

Exclusive: Baseten, AI inference unicorn, raises $150 million at $2.15 billion valuation

Baseten provides inference infrastructure that enables companies to deploy, manage, and scale AI models while rapidly increasing revenue and valuation.
Artificial intelligence
from InfoWorld
2 months ago

Evolving Kubernetes for generative AI inference

Kubernetes now offers native AI inference features, including vLLM support, inference benchmarking, LLM-aware routing, inference gateway extensions, and accelerator scheduling.
Artificial intelligence
from InfoQ
5 months ago

Google Enhances LiteRT for Faster On-Device Inference

LiteRT simplifies on-device ML inference with enhanced GPU and NPU support for faster performance and lower power consumption.
from Techzine Global
5 months ago

Red Hat lays foundation for AI inferencing: Server and llm-d project

AI inferencing is crucial for unlocking the full potential of artificial intelligence, as it enables models to apply learned knowledge to real-world situations.
Artificial intelligence
from IT Pro
6 months ago

'TPUs just work': Why Google Cloud is betting big on its custom chips

Google's seventh-generation TPU, 'Ironwood', aims to lead in AI workload efficiency and cost-effectiveness.
TPUs were developed with close hardware-software co-design, enhancing their utility for AI applications.