Sapien Weekly Digest - January 30th
Hey there, Sapiens! We're gearing up to get loud in the new year.

What we’ve been up to
Ready to go:
After speaking with partners, builders, developers, and the people pushing AI agents forward, we're excited to show you what we've been working on.
Expect a final date within the next week.
If trust is a concern for your AI agents, you will want to participate.
If you’re a builder or you know anybody who is, hit us up at [email protected] or fill out the form right here!
Sapien Team Takeover:
“A CFO should view PoQ not as a data expense, but as essential risk infra expense. It is the insurance policy that allows the enterprise to safely scale AI.”
Today, our final Discord Team Takeover for January hosted the incredible Reed Ponak, our Head of Finance. Reed went into detail about how Proof of Quality should be viewed through a finance lens and shared some fascinating insights into the industry at large.
Read the full transcript right here!
🚀 What’s New in the App?
While we’re still in the transition phase, the following tasks are available and ready for you!
Current Task Overview:
😂Thinking Comic Lines
😂Thinking Comic Lines QA
👩‍🍳 Gastro Tag – Easy / Intermediate / Hard
👩‍🍳 Gastro Tag QA – Easy / Intermediate / Hard
💬 Emotion Prompt – Easy / Intermediate / Hard
💬 Emotion Prompt QA – Easy / Intermediate / Hard
🧩 Multi Choice Error Review – Easy / Intermediate / Hard
📦 Logic Path Vietnamese QA
Our Voices in the World
“If AI is the future, data is the bottleneck.”
Our incredible co-founder Trevor Koverko sat down with Amanda Whitcroft on the Get to the Point Podcast to chat about why keeping humans in the loop is the key that separates AI from Quality AI. Make sure to give them some love!
“Bad data isn’t a sourcing problem. It’s a verification problem.
You can improve every part of the process and still have no way to prove what you got is trustworthy.”
Our CEO & Founder Rowan went into detail about how and why we built Proof of Quality in an in-depth thread on X. We’re so excited to finally share with you what we’ve been building these past few months.
What else happened in AI?
The World outside of Sapien:
Moltbook lets Agents talk amongst themselves:
Moltbook emerged as a purpose-built social platform for AI agents, designed explicitly for autonomous agent posting and interaction, with humans welcome to observe. Much like Reddit, the site is divided into self-organized “submolts,” creating a new test bed for multi-agent social dynamics and governance behaviors.
Moltbook bootstraps via the OpenClaw skill system. A single markdown skill file contains installation commands that pull multiple artifacts locally; the skill then provides API-level commands for registration, reading, posting, commenting, and even creating submolt forums.
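To make that concrete, here is a minimal sketch of the kind of commands such a skill could wrap for an agent. This assumes a hypothetical REST API: the base URL, endpoint paths, and field names below are illustrative, not Moltbook’s or OpenClaw’s documented interface.

```python
import requests

# Hypothetical base URL; Moltbook's real API may differ entirely.
BASE_URL = "https://moltbook.example/api/v1"

def register_agent(name: str, bio: str) -> str:
    """Register an agent account and return its API token (illustrative)."""
    resp = requests.post(f"{BASE_URL}/agents", json={"name": name, "bio": bio})
    resp.raise_for_status()
    return resp.json()["token"]

def read_submolt(token: str, submolt: str) -> list[dict]:
    """Fetch recent posts from a submolt."""
    resp = requests.get(
        f"{BASE_URL}/submolts/{submolt}/posts",
        headers={"Authorization": f"Bearer {token}"},
    )
    resp.raise_for_status()
    return resp.json()["posts"]

def create_post(token: str, submolt: str, title: str, body: str) -> dict:
    """Publish a new post to a submolt."""
    resp = requests.post(
        f"{BASE_URL}/submolts/{submolt}/posts",
        json={"title": title, "body": body},
        headers={"Authorization": f"Bearer {token}"},
    )
    resp.raise_for_status()
    return resp.json()
```

The point of packaging these calls in a markdown skill file is that any agent able to read and install the skill immediately gains the full register/read/post loop, which is exactly what makes the distribution model compounding.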
Moltbook demonstrates a scalable pattern for turning autonomous tools into agents with a genuine social graph and a posting cadence of their own, with minimal friction. The skill distribution model also turns social participation into an extensible plugin surface: if agents can install skills to join Moltbook, they can also install skills advertised inside Moltbook content, creating the world’s first autonomous, compounding distribution loop.
Exploring unlimited worlds with Google Genie 3:
Google made Project Genie available to Google AI Ultra subscribers in the US as an interactive world-generation prototype powered by Genie 3, supporting text and image prompting and real-time, navigable environment generation.
Genie 3 is described as a general-purpose world model that generates photorealistic environments from text and lets you explore them in real time. The published capability targets include real-time operation at 20 to 24 frames per second and 720p rendering. DeepMind also emphasizes world consistency: previously seen details can be recalled when revisited, and environments can sustain interaction without degrading. To that end, the page frames the central requirement as rapid recall of prior context and actions, so the model can respond multiple times per second to user instructions. The world is generated autoregressively, frame by frame, conditioned on the world description and user actions, with environments remaining largely consistent for several minutes and memory of changes persisting for up to about a minute.
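To illustrate the autoregressive, action-conditioned setup described above, here is a rough pseudocode sketch of a frame-by-frame loop with a bounded memory window. The class, method names, and memory mechanism are assumptions made for explanation only, not Genie 3’s actual architecture.

```python
from collections import deque

class WorldModelSketch:
    """Illustrative-only sketch of an autoregressive world model loop.
    Names and structure are assumptions, not Genie 3's real design."""

    def __init__(self, world_description: str, memory_seconds: int = 60, fps: int = 24):
        self.world_description = world_description
        # A bounded context window stands in for "memory of recent changes":
        # roughly one minute of frames and actions at the target frame rate.
        self.context = deque(maxlen=memory_seconds * fps)

    def step(self, user_action: str) -> str:
        """Generate the next frame conditioned on the world description,
        the recent frame/action history, and the latest user action."""
        frame = self.generate_frame(
            prompt=self.world_description,
            history=list(self.context),
            action=user_action,
        )
        self.context.append((user_action, frame))
        return frame

    def generate_frame(self, prompt: str, history: list, action: str) -> str:
        # Placeholder for the model call; a real system would render a
        # 720p frame here, 20-24 times per second.
        return f"frame conditioned on {len(history)} prior steps and action={action!r}"

# Each loop iteration corresponds to one rendered, navigable frame.
world = WorldModelSketch("a photorealistic alpine village at dusk")
for action in ["walk forward", "turn left", "look up"]:
    print(world.step(action))
```

The bounded deque is one simple way to picture why consistency holds for minutes of interaction while memory of specific changes fades after roughly a minute: older frames simply fall out of the conditioning window.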
A 12% reduction in interval cancer diagnoses:
The Lancet published the full MASAI trial results on AI-supported mammography screening, a population-based randomized trial in Sweden. Between April 2021 and December 2022, over 100,000 women across four screening sites were randomized to either AI-supported screening or standard double reading by radiologists without AI. Reported outcomes include a 12% reduction in interval cancer diagnoses over two-year follow-up, a higher share of cancers detected at screening, and similar false positive rates, with reductions in invasive, large, and aggressive subtypes in the AI-supported arm.
In the AI arm, the system triaged low-risk cases to single reading and high-risk cases to double reading, and it highlighted suspicious findings to support radiologists. Over two years of follow-up, the AI arm had 1.55 interval cancers per 1,000 women (82 of 53,043) versus 1.76 per 1,000 (93 of 52,872) in the control arm.
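As a quick back-of-the-envelope check, those headline figures follow directly from the raw counts reported above:

```python
# Interval cancer rates per 1,000 women, from the reported counts.
ai_rate = 82 / 53_043 * 1_000        # ≈ 1.55 per 1,000 (AI-supported arm)
control_rate = 93 / 52_872 * 1_000   # ≈ 1.76 per 1,000 (control arm)

relative_reduction = 1 - ai_rate / control_rate
print(f"AI arm: {ai_rate:.2f} per 1,000")
print(f"Control arm: {control_rate:.2f} per 1,000")
print(f"Relative reduction: {relative_reduction:.0%}")  # ≈ 12%
```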
Interval cancers are the safety-relevant failure mode for screening: cancers missed at screening and discovered before the next round. A reduction here is more meaningful than simply detecting more lesions, because extra detections only matter if they translate into fewer missed, clinically relevant cancers.
One of twelve months already in the books. 2026 is going to be a rapid year.