AI Image Generation in the HackerNoon Editor (with Stable Diffusion)

Written by RichardJohnn
Tech Story Tags: stable-diffusion | hackernoon-product | image-generation | hackernoon-top-story | hackernoon-editor | hackernoon-community | hackernoon-contributors | ai-image-generation | web-monetization

TL;DR: Published writers can now generate images with Stable Diffusion right from the HackerNoon image uploader. The more detail you provide in your prompt, the better the output tends to be. Coming up: a gallery of generated images as an image source for all users, image-to-image iteration so you can sculpt a result prompt by prompt, and streamlined support in the 3.0 editor.

Hello HackerNoon writers 👋 I have an exciting announcement to make:

Published writers can now generate images using Stable Diffusion! 🥳

You will see a new option at the bottom of the image upload modal. It has a little eye icon 👁️ next to it; you can’t miss it. Just be aware that we currently rate limit each user to 10 images an hour. If an error pops up saying NSFW content was detected, that attempt will not count toward the limit. Just edit the prompt and try not to be so gross next time. 😁
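
For the curious, the limit boils down to a per-user counter over a sliding one-hour window, and an attempt only counts once it succeeds, which is why NSFW rejections don’t eat into your ten. Here’s a minimal sketch of that logic; the in-memory map and function names are hypothetical, not our actual implementation:

```typescript
// Hypothetical sliding-window rate limiter: 10 successful
// generations per user per hour; failed/NSFW attempts are free.
const HOURLY_LIMIT = 10;
const WINDOW_MS = 60 * 60 * 1000;

// userId -> timestamps of successful generations
const successTimestamps = new Map<string, number[]>();

function canGenerate(userId: string, now = Date.now()): boolean {
  // Keep only timestamps from the last hour, then check the count.
  const recent = (successTimestamps.get(userId) ?? []).filter(
    (t) => now - t < WINDOW_MS
  );
  successTimestamps.set(userId, recent);
  return recent.length < HOURLY_LIMIT;
}

function recordSuccess(userId: string, now = Date.now()): void {
  // Called only after a generation completes without the NSFW flag,
  // so rejected prompts never count toward the limit.
  const times = successTimestamps.get(userId) ?? [];
  times.push(now);
  successTimestamps.set(userId, times);
}
```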

Anywho, just type in what you’d like to see in the image and press enter or the Submit button, then wait a bit while we go talk to replicate.com to generate your image. It takes a moment because we first generate a 768 × 512 pixel image and then send it to another endpoint to “upscale” it using Real-ESRGAN. After this step the image is 1536 × 1024 pixels, and edges and small details become more defined.
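
If you’re curious what that two-step pipeline looks like, here’s a rough sketch against Replicate’s REST API. The version constants, the createPrediction helper, and the exact input parameter names are illustrative assumptions, not our production code; check replicate.com’s docs for the real model versions and fields:

```typescript
// Rough sketch of the generate-then-upscale flow via Replicate's REST API.
// The version hashes below are placeholders, not real model versions.
const STABLE_DIFFUSION_VERSION = "stable-diffusion-version-hash";
const REAL_ESRGAN_VERSION = "real-esrgan-version-hash";
const REPLICATE_API = "https://api.replicate.com/v1/predictions";

async function createPrediction(version: string, input: object) {
  const headers = {
    Authorization: `Token ${process.env.REPLICATE_API_TOKEN}`,
    "Content-Type": "application/json",
  };
  const res = await fetch(REPLICATE_API, {
    method: "POST",
    headers,
    body: JSON.stringify({ version, input }),
  });
  let prediction = await res.json();

  // Predictions run asynchronously, so poll until the job settles.
  while (!["succeeded", "failed"].includes(prediction.status)) {
    await new Promise((resolve) => setTimeout(resolve, 1000));
    const poll = await fetch(`${REPLICATE_API}/${prediction.id}`, { headers });
    prediction = await poll.json();
  }
  if (prediction.status === "failed") throw new Error(prediction.error);
  return prediction.output;
}

async function generateImage(prompt: string) {
  // Step 1: a 768 × 512 Stable Diffusion render.
  const [rawUrl] = await createPrediction(STABLE_DIFFUSION_VERSION, {
    prompt,
    width: 768,
    height: 512,
  });
  // Step 2: Real-ESRGAN doubles it to 1536 × 1024 and sharpens details.
  return createPrediction(REAL_ESRGAN_VERSION, { image: rawUrl, scale: 2 });
}
```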

Here is a before and after of that process:

Not bad… weird, but not bad!

Voila! The “shells” have finer details. All the edges are more clearly defined. The clouds and grass look more realistic. The image is larger and sharper. 👍

Writer’s Block

Coming up with a good prompt to get the results you want is its own challenge. There are plenty of sites and documents around the web with advice and tips on how to write a good prompt; https://publicprompts.art/ is one such site. You may be surprised to see that prompts can get quite long and somewhat redundant. The more details you provide as input, the higher quality the output tends to be. Guides written for other image generation tools like DALL-E 2 are largely applicable to Stable Diffusion as well.
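
To make that concrete, here’s the kind of difference detail makes. These two prompts are illustrative examples I made up, not taken from any particular guide:

```typescript
// A bare prompt vs. a padded-out one: the second usually produces
// a far richer, more deliberate image. (Illustrative examples only.)
const bare = "a castle";

const detailed =
  "a medieval castle on a sea cliff at golden hour, dramatic clouds, " +
  "volumetric lighting, intricate stonework, matte painting, " +
  "highly detailed, trending on artstation, 4k";
```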

Future Iterations

In the future I would like to implement the image-to-image feature, where you can iterate on an image, changing the prompt each time (if you’d like) to gradually sculpt the image closer to your original intentions. There is also the option to mask part of the image and only regenerate that portion, commonly referred to as “inpainting”.
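
If we build it, those calls would look a lot like the text-to-image one, just feeding the previous output back in. A hedged sketch, reusing the hypothetical createPrediction helper and version constants from the earlier snippet; parameter names like init_image, prompt_strength, and mask are assumptions to check against each model’s docs:

```typescript
// Hypothetical image-to-image pass: start from an earlier render and
// nudge it with a revised prompt.
async function refineImage(previousUrl: string, newPrompt: string) {
  return createPrediction(STABLE_DIFFUSION_VERSION, {
    prompt: newPrompt,
    init_image: previousUrl, // the image being iterated on
    prompt_strength: 0.6,    // lower values stay closer to the original
  });
}

// Hypothetical inpainting pass: the mask marks the region to regenerate;
// everything outside it is left untouched.
const INPAINTING_VERSION = "inpainting-model-version-hash"; // placeholder

async function inpaint(imageUrl: string, maskUrl: string, prompt: string) {
  return createPrediction(INPAINTING_VERSION, {
    prompt,
    image: imageUrl,
    mask: maskUrl,
  });
}
```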

Also, I need to improve the 3.0 editor to allow these images to be created from within the body of the story, like the 2.0 editor can. Currently in the 3.0 editor, only the featured image uploader has the Stable Diffusion option. The 3.0 editor can still use URLs or files on your computer, so you can work around that limitation, but I’d love to see a streamlined prompt popup there too.

A gallery of images generated this way is coming as well. We’d like to offer this gallery as an image source for all users in the uploader, alongside Unsplash, Pexels, and Pixabay. Featuring the image generations you’ve produced on your profile or about page is another idea under consideration. I also want to revisit a pixel art GIF maker that Dane created some time ago and integrate Stable Diffusion into it. Let me know what features you’d like to see in the comments 🙏

That’s all for now, have a great day.✌️


Written by RichardJohnn | VP of Engineering at HackerNoon
Published by HackerNoon