VQGAN and CLIP are two separate machine learning models that can be used together to generate images from a text prompt. VQGAN is a generative adversarial network that synthesizes images resembling those it was trained on (but not from a prompt), and CLIP is a neural network that measures how well a caption (or prompt) matches a given image.
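As a rough illustration of how the two models interact, the sketch below optimizes a latent so that the decoded image's CLIP embedding moves toward the prompt's embedding. This is a minimal sketch, not the exact method used by any of these repositories: `decode` is a hypothetical placeholder for a pretrained VQGAN decoder, and details such as image normalization, augmentation, and latent quantization are omitted.

```python
import torch
import torch.nn.functional as F
import clip  # OpenAI CLIP: pip install git+https://github.com/openai/CLIP.git

device = "cuda" if torch.cuda.is_available() else "cpu"
model, _ = clip.load("ViT-B/32", device=device)
model = model.float()  # keep everything in fp32 for simplicity

def decode(z):
    # Hypothetical stand-in for a pretrained VQGAN decoder: a real one maps
    # quantized latents to an RGB image. Here we just upsample the latent to
    # CLIP's 224x224 input resolution and squash values into [0, 1].
    return torch.sigmoid(F.interpolate(z, size=(224, 224), mode="bilinear"))

z = torch.randn(1, 3, 28, 28, device=device, requires_grad=True)
text = clip.tokenize(["a watercolor painting of a fox"]).to(device)
with torch.no_grad():
    text_emb = model.encode_text(text)  # prompt embedding, computed once

opt = torch.optim.Adam([z], lr=0.05)
for step in range(200):
    image_emb = model.encode_image(decode(z))  # CLIP embedding of the image
    # CLIP scores the prompt/image match; minimizing the negative cosine
    # similarity nudges the latent toward images that match the prompt.
    loss = -F.cosine_similarity(image_emb, text_emb).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()
```

Practical VQGAN+CLIP implementations also score several random crops or augmentations of the decoded image per step, which tends to stabilize the optimization considerably.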
Pipeline to create the Paper2Fig dataset, a dataset for text-to-image generation built from research papers and their figures (e.g., diagrams of architectures or methods in fields such as Machine Learning and Computer Vision).
OCR-VQGAN, a discrete image encoder (tokenizer and detokenizer) for the figure images in the Paper2Fig100k dataset. Includes an implementation of an OCR perceptual loss for generating clear text within images. A fork of VQGAN from CompVis/taming-transformers.
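The description above is brief, so here is a hedged sketch of what an OCR perceptual loss can look like in general: feature maps from a frozen OCR backbone are compared between the reconstruction and the target, so errors on text strokes are penalized in feature space rather than pixel space. `OCRBackbone` below is a hypothetical placeholder; the actual repository relies on features from a pretrained text-detection/recognition model.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class OCRBackbone(nn.Module):
    # Hypothetical placeholder conv stack; a real OCR perceptual loss uses a
    # pretrained OCR feature extractor so text regions dominate the signal.
    def __init__(self):
        super().__init__()
        self.stages = nn.ModuleList([
            nn.Sequential(nn.Conv2d(3, 32, 3, 2, 1), nn.ReLU()),
            nn.Sequential(nn.Conv2d(32, 64, 3, 2, 1), nn.ReLU()),
            nn.Sequential(nn.Conv2d(64, 128, 3, 2, 1), nn.ReLU()),
        ])

    def forward(self, x):
        feats = []
        for stage in self.stages:  # collect multi-scale feature maps
            x = stage(x)
            feats.append(x)
        return feats

class OCRPerceptualLoss(nn.Module):
    def __init__(self):
        super().__init__()
        self.backbone = OCRBackbone().eval()
        for p in self.backbone.parameters():
            p.requires_grad_(False)  # the feature extractor stays frozen

    def forward(self, recon, target):
        # Sum L1 distances between feature maps at every stage.
        loss = 0.0
        for f_r, f_t in zip(self.backbone(recon), self.backbone(target)):
            loss = loss + F.l1_loss(f_r, f_t)
        return loss

# Usage: add this term to the VQGAN reconstruction objective.
criterion = OCRPerceptualLoss()
recon, target = torch.rand(1, 3, 256, 256), torch.rand(1, 3, 256, 256)
print(criterion(recon, target))
```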
Art generation with VQGAN + CLIP in Docker containers. A simplified, updated, and expanded version of Kevin Costa's work. This project aims to make generating art as easy as possible for anyone with a GPU by providing a simple web UI.