
In recent years, the capabilities and accessibility of tools based on artificial intelligence have grown exponentially. Not only do these AIs assist us every day from our smartphones, but over time they have become genuine creative tools whose only practical limit is our imagination.

The best-known example is undoubtedly DALL-E, an artificial-intelligence system that generates images from a text description. The more detailed the input, the closer the result will be to what you want.

With that in mind, we want to share an experience of ours which we believe can help you understand the enormous potential of this tool.

Content creation and AI

We recently found ourselves in a situation that is quite common for a content creator: we had to create an image for Instagram to launch a new product on an e-commerce site we have been managing for some time.

One of the things we love about this project is that the client gives us a great deal of freedom in deciding the content of the communication. This lets us give our imagination free rein and produce fun, unusual content. The project in question is lacanapalegale.it.

The products of this unusual e-commerce site very often have decidedly bizarre or peculiar names: Orange Gelato, Blueberry x Melon, Cannatonic and Green Poison. It was this last strain that we had to focus on for the Instagram post. In our minds the idea was vivid: a poison bottle filled with the product, lit by a green light and surrounded by smoke.

Usually the process of producing an image of this kind follows one of two paths:

  • building the scene physically with a set and a camera and photographing it (a solution that exceeded both our budget and the time available);
  • or, as we had already done on other occasions, producing the image in Photoshop.

The latter was the most accessible and effective solution for us, but another problem arose: the source images. A photomontage always needs a starting point, to which other elements are combined to build what you want to achieve. After searching the various repositories we usually source from, we realized there was nothing even remotely close to what we had in mind.

Having weighed the options, we therefore had to find an alternative to these more traditional approaches. That is how we decided to experiment with a new way of producing images that has become very popular recently: the DALL-E artificial intelligence.

AI at the service of design

With DALL-E, our way of producing images reached a turning point: we could finally create images without any physical or budget constraint, because different alternatives can be generated in just a few minutes. The real limitation of the AI lies in its “understanding of the text”, that is, in giving the correct interpretation to the words entered in the prompt. It interprets the description it receives, but in order to “see live” the image in our head we had to describe it as precisely and accurately as possible.
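To give an idea of what “describing it as precisely as possible” means in practice, here is a minimal sketch of a text-to-image request using OpenAI’s Python SDK. The prompt wording, model choice and parameters are illustrative assumptions on our part, not the exact calls behind our post; DALL-E can also be used entirely from its web interface.

```python
# Minimal sketch: generating an image from a detailed text prompt with the
# OpenAI Images API. Prompt text and parameters are illustrative assumptions.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

prompt = (
    "An old glass poison bottle standing upright on a dark surface, "
    "lit by an eerie green light and surrounded by thin smoke, "
    "moody product photograph"
)

result = client.images.generate(
    model="dall-e-2",
    prompt=prompt,
    n=4,                # generate a few alternatives to choose from
    size="1024x1024",
)

for image in result.data:
    print(image.url)    # links to the generated candidates
```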

After several unsatisfactory attempts, we tried building the image no longer starting from the subject, but from the context. DALL-E in fact lets you select a portion of the image and modify only that region. This way we were able first to generate the poison bottle in a dark setting, and then to insert the product afterwards.
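The same “context first, subject later” idea can also be reproduced programmatically through the image edit endpoint, which takes the original image plus a mask whose transparent area marks the only region to be redrawn. The sketch below is a hypothetical example (file names and prompt are invented for illustration), not the exact workflow we followed in the DALL-E editor.

```python
# Sketch of region editing (inpainting): only the transparent area of the
# mask is regenerated, the rest of the dark scene is kept as-is.
# File names and prompt are illustrative assumptions.
from openai import OpenAI

client = OpenAI()

result = client.images.edit(
    model="dall-e-2",
    image=open("dark_scene_with_bottle.png", "rb"),  # the context we were happy with
    mask=open("bottle_area_mask.png", "rb"),         # transparent where DALL-E may redraw
    prompt="the glass poison bottle filled with dried green hemp flowers",
    n=1,
    size="1024x1024",
)

print(result.data[0].url)
```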

By doing so we compartmentalized the image, and therefore its generation, reducing the chances of the AI making mistakes. At that point all we had to do was add the finishing touches and insert the text in Photoshop to obtain what we then posted online.

Creating images with artificial intelligence: it’s all about what you say and how you say it!

This creation process made us think hard about how we produce images for social media. We often have to fall back on stock images, which may not be exactly what we are looking for, but to which we have to adapt because that is “what the market offers”. Moreover, you don’t always have the resources to set up professional photo shoots that reflect the idea you have in mind.

What did we take away from this experience?

Definitely the pleasure of seeing your own idea take shape through such an unusual and immaterial process. At first we weren’t sure this approach would deliver the expected results, but our determination didn’t let us stop.

We were also curious about how much effort such a process would require. To be honest, it wasn’t easy: it takes a clear idea, a capable tool and a descriptive ability well calibrated to the tool’s capacity to understand. The real difficulty, in fact, was working out what, and how much, to say about the product we wanted.

With DALL-E, in fact, it is not a matter of moving step by step towards a result, but of instantly verifying whether the words you used have brought to life what you imagined. Nothing can be taken for granted and, above all, you can’t expect an artificial intelligence to read your mind (for now) and know that you want a green light, or that the poison bottle you are referring to should be standing upright rather than lying on its side.

We would love to see more work of this kind, to help us understand how far we can get with a linguistic rather than technical effort to give life to the image we actually have in our head.

While we were writing the conclusion of this article, another question came up: what would happen if this technology were applied to 3D printing? What would stop us then?