Daily Reading List – April 14, 2025 (#532)
I’m digging out of a reading backlog after last week’s full-time conference program. Today’s reading list has some remaining Cloud Next content, but also other interesting tidbits.
[blog] DolphinGemma: How Google AI is helping decode dolphin communication. Let’s do this. The potential for things like this seems enormous.
[article] Google mobilizes partners to drive agentic AI across clouds. I like that our agent-to-agent protocol has buy-in across SaaS vendors and implementation partners. It’s not an engineering-focused spec; it’s about solving real problems. More here.
[blog] Designing software that could possibly work. Sean explains why tracing user flows is a good way to plan your software.
[blog] Always deploy at peak traffic. Short post, bold advice. But the reasoning appears sound to me.
[blog] Announcing Genkit for Python and Go. This general purpose framework for building AI apps keeps expanding its list of supported languages.
[blog] What’s new in Firebase at Cloud Next 2025. Related, a huge week from Firebase. From relational databases to AI-driven dev experience, Firebase is reclaiming mindshare.
[post] Just did a deep dive into Google’s Agent Development Kit (ADK). Here are some thoughts, nitpicks, and things I loved (unbiased). I read through this feedback from a Redditor about our new ADK. Good points!
[blog] Prompt Engineering Techniques with Spring AI. These techniques can apply to multiple frameworks, but this is shown in the context of Spring’s useful AI extension.
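Since the techniques are framework-agnostic, here's a minimal sketch of one of the most common ones, few-shot prompting, in plain Python with no particular LLM SDK assumed (the function name and example data are mine, not from the linked post):

```python
def build_few_shot_prompt(examples, query):
    """Assemble a few-shot prompt: worked examples first, then the real query.

    `examples` is a list of (input, output) pairs; the model is expected
    to continue the pattern for `query`.
    """
    lines = []
    for inp, out in examples:
        lines.append(f"Input: {inp}\nOutput: {out}")
    # Leave the final Output blank for the model to complete.
    lines.append(f"Input: {query}\nOutput:")
    return "\n\n".join(lines)


prompt = build_few_shot_prompt(
    [("2 + 2", "4"), ("3 + 5", "8")],
    "7 + 6",
)
```

The same string would then be passed to whichever chat or completion API your framework wraps; Spring AI just gives you typed prompt-template helpers around this idea.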
[blog] New GKE inference capabilities reduce costs, tail latency and increase throughput. Many folks are using Kubernetes as their foundation for training and serving models. This inference gateway looks powerful.
[youtube-video] Why Spring Boot is the BEST Backend Framework. I respect those who share absurdly aggressive opinions, whether I agree or disagree. Take a stance!
[blog] Taming the Wild West of ML: Practical Model Signing with Sigstore. This looks like an important update to AI security, and something you should keep an eye on.
[codelab] How to host an LLM in a sidecar for a Cloud Run function. This is a bonkers scenario, but the more I thought about it, the more intriguing it seemed.
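For context on why this works at all: Cloud Run supports multi-container services, where only the ingress container exposes a port and sidecars are reached over localhost. A rough, hedged sketch of what such a service spec looks like (service name, images, and ports here are placeholders, and the exact annotations in the codelab may differ):

```yaml
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: llm-sidecar-demo        # hypothetical service name
spec:
  template:
    metadata:
      annotations:
        # Start the LLM sidecar before the main function container.
        run.googleapis.com/container-dependencies: '{"function":["llm"]}'
    spec:
      containers:
      - name: function          # ingress container: receives HTTP traffic
        image: IMAGE_URL        # placeholder for your function image
        ports:
        - containerPort: 8080
      - name: llm               # sidecar: serves the model over localhost
        image: LLM_SERVER_IMAGE # placeholder for a local LLM server image
```

The function container calls the model on localhost, so the LLM never needs to be exposed publicly.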
Want to get this update sent to you every day? Subscribe to my RSS feed or subscribe via email below: