Original paper: Section "GAN paper".
The GAN paper itself does a bit of this, a cool hello world:
Generative adversarial networks illustrate AI brittleness well: an input that looks obvious to a human can get completely misclassified by a deep learning model.
This is going to be the most important application of generative AI, especially if we ever achieve good text-to-video.
Image generators plus human ranking:
- pornpen.ai/: a bit too restrictive. Girl lying down. Girl sitting. Penis or no penis. But relatively good at what it does.
- civitai.tv/. How to reach it: civitai.tv/tag/nun/2/
www.pornhub.com/view_video.php?viewkey=ph63c71351edece: Heavenly Bodies Part 1: Sister Mary's First Act. Pornhub title: "AI generated Hentai Story: Sexy Nun alternative World(Isekai) Stable Diffusion". Interesting concept: a slide-narrated visual novel. The question is how they managed to keep face consistency across images.
Very useful for idiotic websites that require real photos!
- thispersondoesnotexist.com/: holy fuck, the images are so photorealistic that when there's a slight fail, it is really, really scary
This just works, but it is also so incredibly slow that it is useless (or at least the quality it reaches in the time we have the patience to wait for is), at least on any setup we've managed to try, including e.g. an Nvidia A10G on a g5.xlarge. Running:
time imagine "a house in the forest"
would likely take hours to complete.
Conda install is a bit annoying, but gets the job done. The generation quality is very good.
Someone should package this better for end-user "just works after Conda install" image generation; it is currently much more of a library setup.
Tested on Amazon EC2 on a g5.xlarge machine, which has an Nvidia A10G, using the AWS Deep Learning Base GPU AMI (Ubuntu 20.04) image.
First install Conda as per Section "Install Conda on Ubuntu", and then just follow the instructions from the README, notably the Reference sampling script section:
git clone https://github.com/runwayml/stable-diffusion
cd stable-diffusion/
git checkout 08ab4d326c96854026c4eb3454cd3b02109ee982
conda env create -f environment.yaml
conda activate ldm
mkdir -p models/ldm/stable-diffusion-v1/
wget -O models/ldm/stable-diffusion-v1/model.ckpt https://huggingface.co/CompVis/stable-diffusion-v-1-4-original/resolve/main/sd-v1-4.ckpt
python scripts/txt2img.py --prompt "a photograph of an astronaut riding a horse" --plms
This took about 2 minutes and generated 6 images under outputs/txt2img-samples/samples, including an image outputs/txt2img-samples/grid-0000.png which is a grid montage containing all six images in one. TODO: how to change the number of images?
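A probable answer, judging by the command-line flags of scripts/txt2img.py at this commit (an assumption, not re-verified here): the 6 comes from a default of 3 samples per batch times 2 iterations, both of which are adjustable, e.g.:
# assumed flags: --n_samples (batch size) and --n_iter (number of batches)
python scripts/txt2img.py --prompt "a photograph of an astronaut riding a horse" --plms --n_samples 1 --n_iter 2
which should then produce 2 images instead of 6.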
A quick attempt at removing their useless safety features (watermark and NSFW filter) is:
diff --git a/scripts/txt2img.py b/scripts/txt2img.py
index 59c16a1..0b8ef25 100644
--- a/scripts/txt2img.py
+++ b/scripts/txt2img.py
@@ -87,10 +87,10 @@ def load_replacement(x):
def check_safety(x_image):
safety_checker_input = safety_feature_extractor(numpy_to_pil(x_image), return_tensors="pt")
x_checked_image, has_nsfw_concept = safety_checker(images=x_image, clip_input=safety_checker_input.pixel_values)
- assert x_checked_image.shape[0] == len(has_nsfw_concept)
- for i in range(len(has_nsfw_concept)):
- if has_nsfw_concept[i]:
- x_checked_image[i] = load_replacement(x_checked_image[i])
+ #assert x_checked_image.shape[0] == len(has_nsfw_concept)
+ #for i in range(len(has_nsfw_concept)):
+ # if has_nsfw_concept[i]:
+ # x_checked_image[i] = load_replacement(x_checked_image[i])
return x_checked_image, has_nsfw_concept
@@ -314,7 +314,7 @@ def main():
for x_sample in x_checked_image_torch:
x_sample = 255. * rearrange(x_sample.cpu().numpy(), 'c h w -> h w c')
img = Image.fromarray(x_sample.astype(np.uint8))
- img = put_watermark(img, wm_encoder)
+ # img = put_watermark(img, wm_encoder)
img.save(os.path.join(sample_path, f"{base_count:05}.png"))
base_count += 1
but that produced 4 black images and only two unfiltered ones. Also, the likely lack of sexual training data makes its porn suck, and not in the good way.
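For reference, applying such a patch is plain git usage, nothing specific to this repo (the filename here is just an example):
# save the diff above as e.g. remove-safety.patch, then:
git apply remove-safety.patch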
Open source software reviews by Ciro Santilli, reviewing mostly the following software:
- askubuntu.com/questions/24059/automatically-generate-subtitles-close-caption-from-a-video-using-speech-to-text/1522895#1522895
- askubuntu.com/questions/161515/speech-recognition-app-to-convert-mp3-voice-to-text/1499768#1499768
- unix.stackexchange.com/questions/256138/is-there-any-decent-speech-recognition-software-for-linux/613392#613392
Hello world: askubuntu.com/questions/380847/is-it-possible-to-translate-words-via-terminal/1309774#1309774
- 2023 vimalabs.github.io/ VIMA: General Robot Manipulation with Multimodal Prompts
Published as: Generative Agents: Interactive Simulacra of Human Behavior by Park et al. (arxiv.org/pdf/2304.03442.pdf)
Highly automated wrapper for various open source LLMs.
curl https://ollama.ai/install.sh | sh
ollama run llama2
And bang, a download later, you get a prompt. On a P14s it runs on the CPU and generates a few tokens at a time, which is quite usable for quick interactive play.
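On Linux, the install script also appears to register a background service running as a dedicated ollama user, which is consistent with the storage path mentioned next; a quick check, assuming systemd:
systemctl status ollama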
As mentioned at github.com/jmorganca/ollama/blob/0174665d0e7dcdd8c60390ab2dd07155ef84eb3f/docs/faq.md, it downloads the models under:
/usr/share/ollama/.ollama/models/
and ncdu tells me:
--- /usr/share/ollama ----------------------------------
3.6 GiB [###########################] /.ollama
4.0 KiB [ ] .bashrc
4.0 KiB [ ] .profile
4.0 KiB [ ] .bash_logout
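The same numbers can be had without ncdu: ollama list shows the installed models, and plain du the disk usage (sudo because the directory belongs to the ollama user):
ollama list
sudo du -sh /usr/share/ollama/.ollama/models/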
We can also do it non-interactively with:
/bin/time ollama run llama2 'What is quantum field theory?'
which gave me:
0.13user 0.17system 2:06.32elapsed 0%CPU (0avgtext+0avgdata 17280maxresident)k
0inputs+0outputs (0major+2203minor)pagefaults 0swaps
but note that there is a random seed that affects each run by default.
Some other quick benchmarks from Amazon EC2 GPUs. On an Nvidia T4:
0.07user 0.05system 0:16.91elapsed 0%CPU (0avgtext+0avgdata 16896maxresident)k
0inputs+0outputs (0major+1960minor)pagefaults 0swaps
On an Nvidia A10G:
0.03user 0.05system 0:09.59elapsed 0%CPU (0avgtext+0avgdata 17312maxresident)k
8inputs+0outputs (1major+1934minor)pagefaults 0swaps
So it's not too bad, a small article in 10s.
It tends to babble quite a lot by default, but eventually decides to stop.
TODO is it possible to make it deterministic on the CLI? There is a "seed" parameter somewhere: github.com/jmorganca/ollama/blob/31f0551dab9a10412ec6af804445e02a70a25fc2/docs/modelfile.md#parameter
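A sketch of one approach that should work, going by the Modelfile documentation linked above (the seed value and the derived model name are arbitrary, untested here):
# hypothetical: pin the sampling seed via a derived model
cat > Modelfile <<'EOF'
FROM llama2
PARAMETER seed 42
EOF
ollama create llama2-seeded -f Modelfile
ollama run llama2-seeded 'What is quantum field theory?'
With the seed pinned, repeated runs of the same prompt should give the same output, barring other sources of nondeterminism.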
By Ciro Santilli:
Other threads:
- www.reddit.com/r/MachineLearning/comments/12kjof5/d_what_is_the_best_open_source_text_to_speech/
- www.reddit.com/r/software/comments/176asxr/best_open_source_texttospeech_available/
- www.reddit.com/r/opensource/comments/19cguhx/i_am_looking_for_tts_software/
- www.reddit.com/r/LocalLLaMA/comments/1dtzfte/best_tts_model_right_now_that_i_can_self_host/
This was the Holy Grail as of 2023, when text-to-image started to really take off, but text-to-video was miles behind.