Spend even a little time online and you can't miss it: Dall-E's creations are everywhere. Distorted faces, garish colors, baffling deformations... sometimes verging on the horrifying. Behind them is an artificial intelligence (AI for short), a computer program that has combed through text and images across the web. Here, there is no limit other than your imagination: the more precise your prompt, the more faithfully the AI produces an image containing every element you asked for. And often, the result is as ugly as it is funny. To try it yourself, enter a sentence, preferably in English, and let the machine run.
The Dall-E Mini project is the work of Boris Dayma and is inspired by Dall-E, a text-to-image program developed by OpenAI, the artificial intelligence company co-founded by Elon Musk, as a 12-billion-parameter version of the GPT-3 language model. The algorithm is built on so-called natural language, meaning the language we humans use to communicate with one another, as opposed to formal languages such as programming languages. Released in early 2021, the project became a huge hit on Twitter within just a few weeks, with many Twitter, Discord, and Reddit threads dedicated to images created by Dall-E Mini. Indeed, one of OpenAI's main goals, as reported by MIT Technology Review, is to "give language models a better grasp of the everyday concepts that humans use to make sense of things".
Why are the images created so ugly?
Like many artificial intelligences, Dall-E learns through deep learning: by processing unstructured data (sound, images, human language, and so on), the program stacks several layers of computation into what is called a "neural network". In Dall-E's case, the program tries to build connections between a sentence and its meaning across many pieces of data. For example, we asked Dall-E to picture Emmanuel Macron on the show Danse avec les stars. The results are astonishing.
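To make the "stacked layers" idea concrete, here is a minimal, illustrative sketch of a neural network's forward pass in Python. The sizes, random weights, and two-layer structure are purely hypothetical teaching devices; they bear no relation to Dall-E's actual architecture, which is vastly larger and trained on real data.

```python
import numpy as np

rng = np.random.default_rng(0)

def layer(x, weights, biases):
    """One dense layer: a linear transform followed by a ReLU non-linearity."""
    return np.maximum(0.0, x @ weights + biases)

# A toy stand-in for a "prompt": a 4-dimensional input vector.
x = rng.normal(size=(1, 4))

# Three stacked layers progressively transform the representation.
w1, b1 = rng.normal(size=(4, 8)), np.zeros(8)
w2, b2 = rng.normal(size=(8, 8)), np.zeros(8)
w3, b3 = rng.normal(size=(8, 2)), np.zeros(2)

h = layer(x, w1, b1)
h = layer(h, w2, b2)
out = layer(h, w3, b3)

print(out.shape)  # the final layer emits a 2-dimensional output per input
```

In a real text-to-image model, the input would be an encoding of the sentence and the output an image; training adjusts the weights so that the connections between words and visual elements emerge from the data.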
But as we can see, the result is often more funny than realistic. While Dall-E Mini has become a monument among memes, above all it shows just how far artificial intelligence still lags behind the human brain. Even if AIs keep improving exponentially, more or less under control, we are a long way from I, Robot. For now, Dall-E Mini remains reassuring about the (limited) capabilities of AIs and their power to shape our future... but that may well change with its evolution, Dall-E 2.
The specter of Dall-E 2, or an AI more accurate than ever
Last April, more than a year after the release of Dall-E Mini, OpenAI announced the arrival of Dall-E 2, saying it can create more realistic images than ever from text descriptions. Its creators describe Dall-E 2 as a model that "can create original, realistic images and art from a text description, combining concepts, attributes, and styles". The software is currently in beta testing with a handful of selected users.
That's because, unlike its "mini" counterpart with its absurd mashups, Dall-E 2 now understands the relationships between objects and can realistically retrieve and edit images from a written description. According to its creators, the AI can even replace part of an image with other, automatically generated elements. Unsettling and astonishing at once: Dall-E is becoming an image creator that can go much further (and above all much faster) than an ordinary human.
Faced with this reality, OpenAI has imposed stricter rules. The group has removed violent content from its training data, applies filters, and enforces policies against nudity, conspiratorial, and political content. As OpenAI's developers point out on their site, "without adequate safeguards, models like Dall-E 2 could be used to create a wide range of misleading content". In the age of deepfakes and fake news, OpenAI would rather not take any risks... and, in the meantime, leave the laughs to its mini version and its ugly images.