Linthicum has also taken serverless to task. "Serverless technology will continue to fade into the background due to the rise of other cloud computing paradigms, such as edge computing and microclouds," he says. Why? Because these "introduced more nuanced solutions to the market with tailored approaches that cater to specific business needs rather than the one-size-fits-all of serverless computing." I once suggested that serverless might displace Kubernetes and containers. I was wrong. Linthicum's more measured approach feels correct because it follows what always seems to happen with big new trends: They don't completely crater, they just stop pretending to solve all of our problems and instead get embraced for modest but still important applications.
The dream would be that AWS lets up on the "hey, we're not behind on AI" panic it's in and refocuses on the core infrastructure it's good at. Judging from the keynote so far, that looks like wishful thinking.
From a technical standpoint, the solution relies on a lightweight serverless function (such as an AWS Lambda) that receives GitLab webhooks via an API Gateway endpoint, formats each payload as structured logs, and ships them to Grafana Cloud Logs. Teams can then run LogQL queries to analyze CI/CD activity by project, deployment success rates, or build times. These logs can also be combined with application performance data in Grafana dashboards, for example, plotting error rates alongside specific deploys or code changes.
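The pipeline above can be sketched in Python. The handler parses the webhook body that API Gateway delivers, reshapes it into Loki's push format (the API Grafana Cloud Logs accepts), and POSTs it. The endpoint URL, environment-variable names, and label set here are illustrative assumptions, not details from the article:

```python
import base64
import json
import os
import time
import urllib.request

# Hypothetical configuration -- supplied via Lambda environment variables.
LOKI_PUSH_URL = os.environ.get("LOKI_PUSH_URL", "")   # e.g. your Grafana Cloud Loki push endpoint
LOKI_USER = os.environ.get("LOKI_USER", "")           # Grafana Cloud instance ID
LOKI_API_KEY = os.environ.get("LOKI_API_KEY", "")


def format_loki_payload(webhook: dict) -> dict:
    """Turn a GitLab webhook body into a Loki push payload.

    Low-cardinality labels identify the stream (source, event kind,
    project); the full event is kept as a JSON log line so LogQL's
    `json` parser can filter on any field later.
    """
    labels = {
        "source": "gitlab",
        "event": webhook.get("object_kind", "unknown"),
        "project": webhook.get("project", {}).get("path_with_namespace", "unknown"),
    }
    return {
        "streams": [
            {
                "stream": labels,
                # Loki expects [nanosecond timestamp as string, log line].
                "values": [[str(time.time_ns()), json.dumps(webhook)]],
            }
        ]
    }


def lambda_handler(event, context):
    """Lambda entry point behind API Gateway: parse, format, ship."""
    webhook = json.loads(event.get("body") or "{}")
    body = json.dumps(format_loki_payload(webhook)).encode()

    token = base64.b64encode(f"{LOKI_USER}:{LOKI_API_KEY}".encode()).decode()
    req = urllib.request.Request(
        LOKI_PUSH_URL,
        data=body,
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Basic {token}",
        },
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return {"statusCode": resp.status, "body": "shipped"}
```

Once events are flowing, a LogQL query along the lines of `{source="gitlab", event="pipeline"} | json | status="failed"` would surface failed pipelines, and `count_over_time` aggregations give the per-project activity and success-rate views the article describes.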
The Java Virtual Machine (JVM) is a marvel of engineering, optimized for long-running, high-performance applications. Its just-in-time (JIT) compiler analyzes code as it runs, making sophisticated optimizations to deliver incredible peak performance. But this strength becomes a weakness in a serverless model. When a Lambda function starts cold, the JVM must go through its entire initialization process: loading classes, verifying bytecode, and beginning the slow warm-up of the JIT compiler. This can take several seconds, an eternity for a latency-sensitive workload.
IBM Cloud Code Engine, the company's fully managed, strategic serverless platform, has introduced Serverless Fleets with integrated GPU support. With this new capability, the company directly addresses the challenge of running large-scale, compute-intensive workloads such as enterprise AI, generative AI, machine learning, and complex simulations on a simplified, pay-as-you-go serverless model. Historically, as noted in academic papers, including a recent Cornell University paper, serverless technology struggled to efficiently support these demanding, parallel workloads.