(ETX Daily Up) – Google has unveiled a new type of artificial intelligence, "Imagen", capable of creating an image from a simple text description. This new playground should allow artists and companies to give free rein to their creativity. But Imagen is, for now at least, not intended for the general public. The reason is a sizeable problem that is still difficult to solve: algorithmic bias.
One year ago, OpenAI unveiled DALL-E, an artificial intelligence model capable of creating an image from a simple text prompt. Google's research lab has now unveiled "Imagen", a new model that the American company claims is more efficient and more powerful. Google describes its invention as "a text-to-image diffusion model with an unprecedented degree of photorealism and a deep level of language understanding."
There is a very simple idea behind this description: a single piece of text is enough to create a myriad of high-quality, realistic images. Imagen can combine concepts and attributes to produce the image of your choice. Among the various demonstrations available on the project's site, we see a small house made of corn cobs, or one made of sushi. A concrete application is easy to imagine for many digital companies, in the form of fast, effective and personalized communication campaigns. For artists, this creative potential could enrich their universe in many ways.
Introducing Imagen, a new text-to-image diffusion model that can produce high-fidelity, photorealistic images from a deep level of language understanding. Learn more and see some examples #Imagen https://t.co/RhD6siY6BY pic.twitter.com/C8javVu3iW
— Google AI (@GoogleAI) May 24, 2022
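Imagen itself has no public code or demo, but for readers who want to see what "one prompt in, one image out" looks like in practice, here is a minimal sketch using an openly available text-to-image diffusion model instead (Stable Diffusion via the Hugging Face diffusers library; the model name and settings here are illustrative assumptions, not Google's tooling):

import torch
from diffusers import StableDiffusionPipeline

# Load an open text-to-image diffusion model (an assumed stand-in; Imagen is not available).
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",  # assumed model id, for illustration only
    torch_dtype=torch.float16,
)
pipe = pipe.to("cuda")  # generation is impractically slow on CPU

# A single line of text is all the model needs to produce an image.
prompt = "a small house made of sushi"
image = pipe(prompt).images[0]
image.save("sushi_house.png")

The demonstrations on Imagen's page illustrate this same pattern, only with Google's own, unreleased model.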
The possibilities offered by these models are almost endless. Still, a consumer tool is not on the agenda, because of a thorny problem: algorithmic bias. In its most concise definition, algorithmic bias means that the output of a learning algorithm is not neutral, because it is trained on an enormous amount of human-produced data, which is itself not neutral. To build and operate these models, which process vast quantities of data, engineers train them as extensively as possible using deep-learning methods.
Stereotypes and prejudices still exist
The aim is to be able to respond to the user's request with maximum accuracy. To achieve such a feat, data must be processed on a massive scale and in all its formats. Datasets scraped from the internet have fueled artificial intelligence since its early days, and they contain everything that can be found online, stereotypes and prejudices included.
When presenting its new model, Google once again warned about this reality, which is keeping the company from releasing it. "There are several ethical challenges facing text-to-image research broadly," Google explains. "The downstream applications of text-to-image models are varied and may impact society in complex ways," raising questions about the responsible release of code and demos.
For now, and unlike DALL-E, the American company has therefore decided not to release the code or offer a public demo. "Preliminary assessment also suggests Imagen encodes several social biases and stereotypes, including an overall bias towards generating images of people with lighter skin tones and a tendency for images portraying different professions to align with Western gender stereotypes," the company adds. Google hopes to eventually open up its model, once further progress has been made on these challenges, in order to avoid potential risks and misuse.
Axel Barre