Workers AI gets a speed boost, batch workload support, more LoRAs, new models, and a refreshed dashboard
Since the launch of Workers AI in September 2023, our mission has been to make inference accessible to everyone.
Over the last few quarters, our Workers AI team has been heads down on improving the quality of our platform, working on routing improvements, GPU optimizations, and better capacity management. Managing a distributed inference platform is not a simple task, but distributed systems are what we do best. You’ll notice a recurring theme across these announcements that has always been part of the core Cloudflare ethos: we try to solve problems through clever engineering so that we can do more with less.
Today, we’re excited to introduce speculative decoding to bring you faster inference, an asynchronous batch API for large workloads, and expanded LoRA support for more customized responses. Lastly, we’ll recap some of our newly added models, cover updated pricing, and unveil a new dashboard to round out the usability of the platform.
<h2>Speeding up inference by 2-4x with speculative decoding and more</h2>
<p>We’re excited to roll out speed improvements to models in our catalog, starting with the Llama 3.3 70b model. These improvements include speculative decoding, prefix caching, an updated inference backend, and more. We’ve previously done a technical deep dive on speculative decoding and how we’re making Workers AI faster, which <a href="https://blog.cloudflare.com/making-workers-ai-faster/"><u>you can read about here</u></a>. With these changes, we’ve been able to improve inference times by 2-4x, without any significant change to the quality of answers generated. We’re planning to incorporate these improvements into more models in the future as we release them. Today, we’re starting to roll out these changes so all Workers AI users of <code>@cf/meta/llama-3.3-70b-instruct-fp8-fast</code> will enjoy this automatic speed boost.</p>
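<p>For reference, here’s a minimal sketch of a Worker calling the accelerated model. The prompt and response handling are just illustrative; the point is that the speed improvements are applied on our side, so existing code like this needs no changes.</p>
<pre><code>export default {
  async fetch(request, env) {
    // Same call as before -- speculative decoding and prefix caching are
    // applied server-side, so no code changes are needed to benefit.
    const answer = await env.AI.run("@cf/meta/llama-3.3-70b-instruct-fp8-fast", {
      prompt: "Explain speculative decoding in two sentences.",
    });
    return Response.json(answer);
  },
};
</code></pre>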
<h3>What is speculative decoding?</h3>
<figure>
<img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/1Jc5CeeOpTW1LSZ7xeZumY/99ced72a25bdabea276f98c03bc17e27/image3.png" />
</figure>
<p>LLMs generate text by predicting the next token in a sequence given the previous tokens. Typically, an LLM predicts a single future token (n+1) with one forward pass through the model. These forward passes can be computationally expensive, since they need to work through all the parameters of a model to generate one token (e.g., 70 billion parameters for Llama 3.3 70b).</p>
<p>With speculative decoding, we put a small model (known as the draft model) in front of the original model to help predict n+x future tokens. The draft model quickly generates a sequence of candidate tokens, and the original model only has to evaluate them and confirm whether they should be incorporated into the generation. Evaluating tokens is less computationally expensive, because the model can evaluate multiple tokens concurrently in a single forward pass. As a result, inference times can be sped up by 2-4x, meaning users get responses much faster.</p>
<p>What makes speculative decoding particularly efficient is that it uses GPU compute that would otherwise be left idle by the memory bottleneck LLM inference creates. Speculative decoding takes advantage of that spare compute by squeezing in a draft model to generate tokens faster, so we can use our GPUs to their full extent rather than having parts of them sit idle.</p>
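<p>To make the propose-and-verify loop more concrete, here’s a toy sketch of a single speculative decoding step. The “models” below are mock objects operating on words rather than real tokens, and the helper names are made up for illustration; this is not how our inference stack is implemented.</p>
<pre><code>// Toy sketch of one speculative decoding step.
const draftModel = {
  // Cheap model: quickly proposes the next k tokens.
  propose(context, k) {
    return ["the", "quick", "brown", "fox"].slice(0, k);
  },
};

const targetModel = {
  // Large model: checks all proposed tokens in one pass and returns how
  // many it agrees with, plus its own token for the first mismatch.
  verify(context, proposed) {
    const own = ["the", "quick", "red", "fox"]; // what it would have generated
    let accepted = 0;
    while (accepted < proposed.length && proposed[accepted] === own[accepted]) {
      accepted++;
    }
    return { accepted: proposed.slice(0, accepted), correction: own[accepted] };
  },
};

function speculativeStep(context, k = 4) {
  const draft = draftModel.propose(context, k);
  const { accepted, correction } = targetModel.verify(context, draft);
  // Keep the accepted prefix; a rejected draft token is replaced by the
  // large model's own choice, so the output matches what it would have
  // produced on its own.
  return context.concat(accepted, correction ?? []);
}

console.log(speculativeStep(["once", "upon", "a", "time"]));
// -> ["once", "upon", "a", "time", "the", "quick", "red"]
</code></pre>
<p>Because up to k+1 tokens can be produced for each expensive verification pass, the speedup depends on how often the draft model agrees with the large model; rejected draft tokens are simply replaced by the large model’s own choices, which is why answer quality is preserved.</p>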
<h3>What is prefix caching?</h3>
<p>With LLMs, there are usually two stages of generation: the first, known as “pre-fill”, processes the user’s input tokens, such as the prompt and context; the second, decoding, generates the output tokens one at a time. Prefix caching is aimed at reducing the pre-fill time of a request. As an example, if you were asking a model to generate code based on a given file, you might insert the whole file into the context window of the request. Then, if you want to make a second request to generate the next line of code, you might send us the whole file again. Prefix caching allows us to cache the pre-fill tokens so we don’t have to process the same context twice. In this example, we would only do the pre-fill stage once for both requests, rather than once per request. This is especially useful for requests that reuse the same context, such as <a href="https://developers.cloudflare.com/reference-architecture/diagrams/ai/ai-rag/"><u>Retrieval Augmented Generation (RAG)</u></a>, code generation, chatbots with memory, and more. Skipping the pre-fill stage for similar requests means faster responses for our users and more efficient usage of resources.</p>
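<p>As a rough illustration of the idea (not how our inference servers are actually implemented), prefix caching amounts to keying the expensive pre-fill work by the shared context and reusing it across requests. The prefill() and decode() functions below are trivial stand-ins.</p>
<pre><code>// Rough illustration of prefix caching with stand-in functions.
const prefixCache = new Map();

function prefill(context) {
  // Pretend this is the expensive pass over all of the context tokens.
  return { contextTokens: context.split(/\s+/).length };
}

function decode(prefillState, question) {
  // Pretend this generates an answer from the cached pre-fill state.
  return `answer to "${question}" using ${prefillState.contextTokens} cached context tokens`;
}

function runWithPrefixCache(sharedContext, question) {
  let state = prefixCache.get(sharedContext);
  if (!state) {
    // The first request with this context pays the full pre-fill cost...
    state = prefill(sharedContext);
    prefixCache.set(sharedContext, state);
  }
  // ...later requests that reuse the same context (RAG, code generation
  // over the same file, chatbots with memory) skip straight to decoding.
  return decode(state, question);
}

const file = "export function add(a, b) { return a + b; }";
console.log(runWithPrefixCache(file, "What does add do?"));       // pays pre-fill
console.log(runWithPrefixCache(file, "Write the next function")); // reuses it
</code></pre>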
<h3>How did you validate that quality is preserved through these optimizations?</h3>
<p>Since this is an in-place update to an existing model, we were particularly cautious about not breaking any existing applications. We did extensive A/B testing through a blind arena with internal employees to validate model quality, and we asked internal and external customers to test the new version of the model to ensure that response formats were compatible and quality was acceptable. Our testing concluded that the model performs up to our standards, and testers were especially excited about its speed. Most LLMs are not perfectly deterministic even with the same set of inputs, but if you do notice something off, please let us know through <a href="https://discord.com/invite/cloudflaredev"><u>Discord</u></a> or <a href="http://x.com/cloudflaredev"><u>X</u></a>.</p>
<h2>Asynchronous batch API</h2>
<p>Next up, we’re announcing an asynchronous (async) batch API, which is helpful for users with large workloads. This feature allows customers to receive their inference responses asynchronously, with the promise that the inference will be completed at a later time rather than immediately erroring out due to capacity limits.</p>
<p>An example batch workload is generating summaries for a large number of documents. You probably don’t need those summaries immediately, since you’ll likely use them once the whole document has been processed rather than one paragraph at a time. For these use cases, we’ve made it super simple to start sending us requests in batches.</p>
<h3>Why batch requests?</h3>
<p>From talking to our customers, the most common use case we hear about is people creating embeddings or summarizing a large number of documents. Unfortunately, this is also one of the hardest use cases to manage capacity for as a serverless platform.</p>
<p>To illustrate this, imagine that you want to summarize a 70-page PDF. You typically chunk the document and then send an inference request for each chunk. If each chunk is a few paragraphs on a page, that means we receive around 4 requests per page across 70 pages, or about 280 requests. Multiply that by tens or hundreds of documents, and again by a handful of concurrent users, and we get a sudden massive influx of thousands of requests whenever users start these large workloads.</p>
<p>The way we originally built Workers AI was to handle incoming requests as quickly as possible, assuming there's a human on the other side who needs an immediate response. The unique thing about batch workloads is that while they're not latency sensitive, they do require completeness guarantees: you don't want to come back the next day to find that none of your inference requests actually executed.</p>
<p>With the async API, you send us a batch of requests, and we promise to fulfill them as fast as possible and return them to you as a batch. This guarantees that your inference requests will be fulfilled, rather than immediately (or eventually) erroring out. The async API also benefits users with real-time use cases, as model instances won’t be immediately consumed by batch requests that can wait for a response. Inference times will be faster because there won’t be a long queue of competing requests waiting to reach the inference servers.</p>
<p>We have select models that support batch inference today, which include:</p>
<ul>
<li><p><a href="https://developers.cloudflare.com/workers-ai/models/llama-3.3-70b-instruct-fp8-fast/"><u>@cf/meta/llama-3.3-70b-instruct-fp8-fast</u></a></p></li>
<li><p><a href="https://developers.cloudflare.com/workers-ai/models/bge-small-en-v1.5"><u>@cf/baai/bge-small-en-v1.5</u></a>, <a href="https://developers.cloudflare.com/workers-ai/models/bge-base-en-v1.5"><u>@cf/baai/bge-base-en-v1.5</u></a>, <a href="https://developers.cloudflare.com/workers-ai/models/bge-large-en-v1.5"><u>@cf/baai/bge-large-en-v1.5</u></a></p></li>
<li><p><a href="https://developers.cloudflare.com/workers-ai/models/bge-m3/"><u>@cf/baai/bge-m3</u></a></p></li>
<li><p><a href="https://developers.cloudflare.com/workers-ai/models/m2m100-1.2b/"><u>@cf/meta/m2m100-1.2b</u></a></p></li>
</ul>
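<p>Going back to the 70-page PDF example above, here’s a rough sketch of how a document might be chunked into individual prompts; the chunking logic and numbers are purely illustrative. The next section shows how an array like this can be handed to the batch API in a single call.</p>
<pre><code>// Illustrative only: turn a long document into per-chunk summarization
// requests. Roughly 4 chunks per page x 70 pages is about 280 requests,
// which all arrive at once if each chunk is sent as its own call.
function buildChunkRequests(pages, chunksPerPage = 4) {
  const requests = [];
  for (const page of pages) {
    const paragraphs = page.split("\n\n");
    const chunkSize = Math.ceil(paragraphs.length / chunksPerPage);
    for (let i = 0; i < paragraphs.length; i += chunkSize) {
      requests.push({
        prompt: "Summarize:\n" + paragraphs.slice(i, i + chunkSize).join("\n\n"),
      });
    }
  }
  return requests; // e.g. ~280 entries for a 70-page PDF
}
</code></pre>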
<h3>How can I use the batch API?</h3>
<p>Users can send a batch request to supported models by passing a flag:</p>
<pre><code>let res = await env.AI.run("@cf/meta/llama-3.3-70b-instruct-batch", {
  "requests": [
    { "prompt": "Explain mechanics of wormholes" },
    { "prompt": "List different plant species found in America" }
  ]
}, { queueRequest: true });
</code></pre>
<p>Check out our developer docs to learn more about the batch API, or use our template to deploy a worker that implements the batch API.</p>
<p>Today, our batch API can be used by sending us an array of requests, and we’ll return your responses in an array. This is helpful for use cases like summarizing large amounts of data that you know about beforehand. It means you can send us a single HTTP request with all of your requests, and receive a single HTTP response back with all of your results. You can check on the status of the batch by polling with the request ID we return when your batch is submitted. For the next iteration of our async API, we plan to allow queue-based inputs and outputs, where you push requests to and pull responses from a queue. This will integrate tightly with Event Notifications and Workflows, so you can execute subsequent actions upon receiving a response.</p>
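<p>Here’s a sketch of today’s array-in, array-out flow described above, including the polling step. The exact field names (status, request_id) and the polling input are assumptions for illustration; check the developer docs for the authoritative request and response shapes.</p>
<pre><code>// Sketch of the submit-then-poll flow. Field names such as status and
// request_id are assumed for illustration; see the developer docs.
export default {
  async fetch(request, env) {
    // 1. Submit the whole batch in a single request.
    const submitted = await env.AI.run("@cf/meta/llama-3.3-70b-instruct-batch", {
      "requests": [
        { "prompt": "Summarize chunk 1 of the document..." },
        { "prompt": "Summarize chunk 2 of the document..." }
      ]
    }, { queueRequest: true });

    // 2. Poll with the returned request ID until the batch has finished.
    //    In practice you might poll from a later request, a cron trigger,
    //    or a Workflow instead of waiting inside one fetch handler.
    let result = submitted;
    while (result.status === "queued" || result.status === "running") {
      await new Promise((resolve) => setTimeout(resolve, 5000));
      result = await env.AI.run("@cf/meta/llama-3.3-70b-instruct-batch", {
        "request_id": submitted.request_id // assumed polling input
      });
    }

    // 3. The responses come back together as an array.
    return Response.json(result);
  },
};
</code></pre>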