Faster image generation, AI-powered world simulator, insights on AI dataset complexity

Editor’s note

Hey everyone! It was so nice to read all of your very positive feedback last week. I definitely didn’t mess up the reply-to address, thus dooming your well-meaning advice to unreadable limbo. That would be ridiculous.

I’ve made all the changes you requested. The letter is now shorter, more informative, easier to read, less dumbed down, more focused, general interest, new, improved and completely reinvented. I have spared no expense to make it exactly what I’m sure you asked for.

That said, if you have any other requests, please shout! As I said, this reply-to address is Fully Tested and Definitely Works Now.

deepfates


A fast, high-quality image generator

The new SDXL-Flash model follows a trend of lightweight derivatives of the Stable Diffusion XL model that generate images in fewer steps. At each step, the model denoises the output of the previous step, so fewer steps means the image generates faster.

Unlike others in this category, SDXL-Flash doesn’t just aim for the fewest steps possible. Instead, its creators train for a tradeoff between step count and image quality.
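If you want to poke at it from code, here’s a minimal sketch using the Replicate Python client. The model slug and input names below are placeholders rather than the real thing, so grab the exact identifiers from the model page.

    # Minimal sketch with the Replicate Python client. The model slug and
    # input names are placeholders -- copy the real ones from the model page.
    # Requires REPLICATE_API_TOKEN to be set in your environment.
    import replicate

    output = replicate.run(
        "some-user/sdxl-flash",  # placeholder slug
        input={
            "prompt": "a chinchilla reading a newsletter, studio lighting",
            "num_inference_steps": 6,  # the whole point: few steps, fast images
        },
    )
    print(output)  # usually a list of image URLs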

try on replicate


Cool tools

A novel interface to generative AI

Nous Research’s WorldSim is back online. The world simulator uses language models to generate a command-line interface for anything you can imagine.

This is…I don’t know, most stuff pertaining to AI and creativity leaves me very cold, but I find the possibilities and the sheer scale of the ‘imagining’ happening here to be really quite spectacular. Now if you’ll excuse me I’m just going to go and inflict some plagues on my medieval peasants and see what happens (turns out that having complete godlike dominion over a planet’s worth of nonexistent AI-generated ‘people’ is also a surprisingly decent way of unearthing any sort of latent psychoses you might be harbouring!).

— Matt Muir, Web Curios

Replicate fans will remember that WorldSim was first revealed at our San Francisco event with Nous Research long ago, in the ancient days of mid-March.

website


Research radar

Better data scales differently

Not all datasets are created equal. The so-called “Chinchilla” scaling law (named after the language model in the original study) claims to be agnostic to the type of data used; i.e., the ratio of model parameters to total tokens of data should be the same regardless of what you’re training on.

Instead, it seems that data with higher complexity (code-heavy datasets and pre-cleaned texts) actually wants more parameters than Chinchilla predicts. The author generates synthetic data at different complexity levels to confirm this.
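For scale, here’s the data-agnostic baseline being challenged. The roughly 20-tokens-per-parameter figure comes from the original Chinchilla paper; the rest is just arithmetic.

    # Chinchilla rule of thumb: ~20 training tokens per parameter,
    # no matter what the data looks like. That's the claim under test here.
    TOKENS_PER_PARAM = 20

    for params in (1e9, 7e9, 70e9):
        tokens = params * TOKENS_PER_PARAM
        print(f"{params / 1e9:>3.0f}B params -> ~{tokens / 1e12:.2f}T tokens")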

A cool finding here: the complexity of a dataset correlates closely with its compressibility, and the author finds gzip a useful proxy for predicting a given dataset’s ideal parameter count.
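The measurement itself is about as simple as it sounds. Here’s a toy sketch of the gzip part; the mapping from compression ratio to ideal parameter count is fit empirically in the paper, so it isn’t shown here.

    import gzip
    import random
    import string

    def gzip_ratio(text: str) -> float:
        """Compressed size / raw size: lower = more compressible = less complex."""
        raw = text.encode("utf-8")
        return len(gzip.compress(raw)) / len(raw)

    repetitive = "the cat sat on the mat. " * 200
    noisy = "".join(random.choices(string.printable, k=len(repetitive)))

    print(f"repetitive text: {gzip_ratio(repetitive):.2f}")  # low ratio
    print(f"noisy text:      {gzip_ratio(noisy):.2f}")       # high ratio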

post | paper


Changelog

Delete stuff

You can now delete models, versions, and deployments using the web or the HTTP API.
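If you’re doing it over the API, here’s a sketch in Python. The endpoint path and auth header follow Replicate’s usual conventions but are assumptions on my part; the docs linked below are the source of truth.

    # Delete a model over the HTTP API -- a sketch, not gospel. The endpoint
    # path and Authorization header are assumptions; check the API docs.
    import os
    import requests

    token = os.environ["REPLICATE_API_TOKEN"]
    owner, name = "your-username", "your-model"  # placeholders

    resp = requests.delete(
        f"https://api.replicate.com/v1/models/{owner}/{name}",
        headers={"Authorization": f"Bearer {token}"},
    )
    resp.raise_for_status()  # a 2xx status means it's gone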

changelog

Webhook verification

We’ve improved our docs and client library support for validating webhook authenticity, so you can confirm your incoming webhook really came from Replicate.
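If you’re curious what that validation looks like under the hood, it boils down to an HMAC over the request body plus a couple of headers. The header names, secret prefix, and signed-payload format below are assumptions borrowed from the common standard-webhooks convention, so treat this as a sketch and lean on the docs and client libraries for the real thing.

    # Generic HMAC webhook check -- a sketch, not Replicate's exact scheme.
    # Assumes a "whsec_<base64 key>"-style secret and a signed payload of
    # "<id>.<timestamp>.<body>", per the standard-webhooks convention.
    import base64
    import hashlib
    import hmac

    def verify_webhook(secret: str, webhook_id: str, timestamp: str,
                       body: bytes, signature_header: str) -> bool:
        key = base64.b64decode(secret.split("_", 1)[-1])
        signed = f"{webhook_id}.{timestamp}.".encode() + body
        expected = base64.b64encode(hmac.new(key, signed, hashlib.sha256).digest()).decode()
        # The header may carry several space-separated "v1,<sig>" entries.
        candidates = [part.split(",", 1)[-1] for part in signature_header.split()]
        return any(hmac.compare_digest(expected, c) for c in candidates)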

changelog


Bye for now

Thanks for reading this again. You’re the best audience a content creator could ask for. Do me a favor and forward this to someone you know?

Coming next week: a way to sign up to this newsletter. Like, on purpose!

— deepfates