The request for its API:

val request = Request[IO](Method.POST, uri"/jobs")
val api = new AsyncJobApi // this will not compile since AsyncJobApi is not defined yet

Minimal implementation to make it green:

class AsyncJobApi

Red test: the API should return a 202 Accepted response:

"POST /jobs returns Accepted" in {
  val request = Request[IO](Method.POST, uri"/jobs")
  val api = new AsyncJobApi
  api.routes.orNotFound.run(request).asserting: response =>
    response.status shouldBe Status.Accepted
}

Make it green:

class AsyncJobApi {
  val routes: HttpRoutes[IO] = HttpRoutes.of[IO]:
    case req @ POST -> Root / "jobs" => Accepted()
}

5.2 Add headers (Trivial Implementation)

Red test: add X-Total-Count and Location headers with the job ID (only the assertion is shown)
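For orientation, here is a minimal, self-contained sketch of where this TDD cycle is heading: an http4s route that answers POST /jobs with 202 Accepted plus the two headers. This is not the article's final code; the UUID-based job id and the X-Total-Count value of "1" are assumptions made purely for illustration.

import cats.effect.IO
import org.http4s.{HttpRoutes, Uri}
import org.http4s.dsl.io._
import org.http4s.headers.Location
import java.util.UUID

class AsyncJobApi {
  val routes: HttpRoutes[IO] = HttpRoutes.of[IO] {
    case POST -> Root / "jobs" =>
      // Create a job id and tell the client where it can poll for the result.
      IO(UUID.randomUUID()).flatMap { id =>
        Accepted().map(
          _.putHeaders(
            Location(Uri.unsafeFromString(s"/jobs/$id")), // where the job can be polled
            "X-Total-Count" -> "1"                        // assumed value, illustration only
          )
        )
      }
  }
}

The assertion in the corresponding red test would then check response.headers.get[Location] and the X-Total-Count value alongside the Accepted status.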
Over the past few months, I've watched two clients move from Scala (Play, Slick, Akka, Akka HTTP, ...) to Kotlin (Spring, JPA/Hibernate). In my current role, an engineering decision was made to move away from Scala. The decision was driven less by Scala's shortcomings and more by long-term career risk management: leaders understandably favor stacks (Java/Kotlin) that maximize hiring flexibility in a volatile market.
High-level view of the travel search workflow, highlighting parallel searches, explicit decision points, and iterative refinement. In Scala, we define this workflow using Workflows4s, encoding both state and transitions explicitly in the type system. Instead of opaque state blobs or untyped contexts, the state of the process is represented using algebraic data types - types like Started, Found, Sent, and Booked - each corresponding to a distinct point in the workflow's lifecycle.
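As a rough sketch (not the article's actual definitions; the field names and supporting types are invented for illustration), such a state ADT could look like this:

sealed trait SearchState
object SearchState {
  // Each case marks a distinct point in the workflow's lifecycle.
  final case class Started(query: TravelQuery)                    extends SearchState
  final case class Found(query: TravelQuery, offers: List[Offer]) extends SearchState
  final case class Sent(offers: List[Offer], recipient: String)   extends SearchState
  final case class Booked(offer: Offer, confirmationId: String)   extends SearchState
}

// Supporting types, also purely illustrative.
final case class TravelQuery(origin: String, destination: String)
final case class Offer(id: String, price: BigDecimal)

Because each state carries exactly the data available at that point, a step that needs an Offer can only be written against states that actually hold one.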
Imagine you're working with a third-party library that provides a User class. You need to add JSON serialization to it, but you can't modify the source code. Of course you can create a wrapper class or extend it, but that feels clunky and breaks existing code that expects the original type. This is where type classes shine. They're one of Scala's most powerful patterns, and they're the secret ingredient in popular libraries like Cats, Scalaz, and Circe.
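As a minimal sketch of the pattern (the User fields and the JSON format here are invented for illustration), a type class lets you attach serialization to User from the outside, shown in Scala 3 syntax:

// A stand-in for the third-party class we cannot modify.
final case class User(name: String, email: String)

// The type class: the capability lives outside the data type.
trait JsonSerializer[A] {
  def toJson(a: A): String
}

object JsonSerializer {
  // An instance for User, defined without touching User itself.
  given JsonSerializer[User] with {
    def toJson(u: User): String =
      s"""{"name": "${u.name}", "email": "${u.email}"}"""
  }
}

// Optional syntax so callers can write user.toJson directly.
extension [A](a: A)(using ser: JsonSerializer[A])
  def toJson: String = ser.toJson(a)

@main def demo(): Unit =
  println(User("Jane", "jane@example.com").toJson)

Cats and Circe follow the same shape: a trait such as Encoder, instances in companion objects, and syntax that makes call sites read naturally.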
We're excited to announce that Scala 2.13 is now in Public Preview (PuPr) for the Snowpark for Scala client, UDxFs, and stored procedures! This release brings the massive collections overhaul, performance improvements, and powerful language enhancements of Scala 2.13 to the Snowflake AI Data Cloud. Where can you use Scala 2.13 in Snowflake? In Snowflake SQL and through the Snowpark Scala client library. Why upgrade?
The spark-sql-perf toolkit doesn't currently work with Spark 4.0+, and this guide shows you how to get it running (with a custom patch). While many developers have their own complex Spark setup, this workflow is designed to be simple and reproducible. It only requires an AWS account to provision a cluster and run a full benchmark from scratch. We'll focus on the patch, the build process, and how a tool like Flintrock makes deploying custom Spark clusters incredibly simple.
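For orientation, a typical run with spark-sql-perf looks roughly like the sketch below, executed from a spark-shell on the cluster. The entry points (TPCDSTables, TPCDS, runExperiment) follow the project's README; the paths, scale factor, and exact parameter names are assumptions and may need adjusting for the patched Spark 4.0 build.

import com.databricks.spark.sql.perf.tpcds.{TPCDS, TPCDSTables}

val sqlContext = spark.sqlContext

// Generate TPC-DS data with dsdgen and register external tables over it.
// Directory, bucket, and scale-factor values are placeholders.
val tables = new TPCDSTables(sqlContext, dsdgenDir = "/opt/tpcds-kit/tools",
  scaleFactor = "100", useDoubleForDecimal = false, useStringForDate = false)
tables.genData(location = "s3a://my-bucket/tpcds", format = "parquet", overwrite = true,
  partitionTables = true, clusterByPartitionColumns = true,
  filterOutNullPartitionValues = false, tableFilter = "", numPartitions = 200)
tables.createExternalTables("s3a://my-bucket/tpcds", "parquet", "tpcds",
  overwrite = true, discoverPartitions = true)

// Run the standard query set and wait for the results.
val tpcds = new TPCDS(sqlContext = sqlContext)
val experiment = tpcds.runExperiment(tpcds.tpcds2_4Queries,
  iterations = 1, resultLocation = "s3a://my-bucket/results")
experiment.waitForFinish(10 * 60 * 60)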
If you've worked with big data long enough, you know that the smallest syntax differences can have massive performance or logic implications. That's especially true when working in Spark with Scala, where functional transformations like map and flatMap control how data moves, expands, or contracts across clusters.
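As a quick, self-contained illustration of that difference (the data here is made up), map keeps a one-to-one shape while flatMap flattens zero-or-more outputs per input:

import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder().master("local[*]").appName("map-vs-flatMap").getOrCreate()
val sc = spark.sparkContext

val lines = sc.parallelize(Seq("hello world", "spark scala", ""))

// map: exactly one output per input, so we get an RDD of word arrays.
val wordArrays = lines.map(_.split(" "))                   // 3 elements

// flatMap: zero or more outputs per input, flattened into one RDD of words.
val words = lines.flatMap(_.split(" ").filter(_.nonEmpty)) // 4 elements

println(wordArrays.count()) // 3 — same cardinality as the input
println(words.count())      // 4 — the empty line contributed nothing

spark.stop()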