The author analyzes how recent innovations can alter our memory of important past events.
The rapid advance in the capabilities of artificial intelligence to create, edit, and enhance photographic images presents significant challenges, with implications for privacy, security, and reliability.
Memories are important. Photographs are one of the most effective means of helping us remember, able to capture moments and visual experiences vividly and in detail.
Like perfume and music, they transport us to places, people, events, and emotions, whether printed or stored in the memory of our mobile devices.
How much power would an AI designed to create and edit photos hold over our lives? Including the power to implant memories of situations that never happened: modifying lived events, those real moments that humans treasure in a photograph.
It is not just about generating false images to shape the present, but about the possibility of slowly and subtly editing the past, our memories, and our history.
On May 10, at its latest keynote, Google I/O, the AI giant presented impressive updates to its Google Photos app, powered by new AI features.
Creating false memories?
“Photo Find” allows you to perform improved searches by identifying people, objects, or places. “Magic Eraser” lets us easily remove those annoying objects (or sometimes people) that “ruin” our photos.
Finally, “Magic Editor” allows us to improve them by changing colors, lighting, and other features. It has become a very powerful application, which even geolocates photos and coordinates them with Google Maps routes.
Paradoxically, these advanced features could cause a setback in AI's own development. There may come a time when the proliferation of created and edited photos outpaces the “real” ones captured by cameras, altering the data sets used to train AI models.
After consulting Google Bard on this issue, it replied: “It is possible that this tipping point will be reached in the next few years.”
Images created or edited with AI can affect the trained models themselves in several ways:
- Overfitting: if AI-generated images contain features or patterns not found in the real images of the training set, the AI model may overfit to those features and become less accurate when producing new images.
- Biases and prejudices: if they contain biases, such as the unequal representation of certain groups of people, the AI model may learn and reproduce those biases in its predictions.
- Fake news and propaganda: they can be designed to trick the AI model into making wrong decisions.
- Inaccuracy in predictions: AI-generated images can be less accurate and informative, leading models to make inaccurate predictions.
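The feedback-loop risk behind these points can be sketched with a toy simulation (entirely illustrative, not from the article, and with made-up numbers): a model estimates the share of one class of image, generates slightly biased new data from its own estimate, and is then retrained on the mix. Round after round, the generated data drags the estimate away from the real-world value.

```python
import random

random.seed(0)

# Toy illustration: 1 = "class A" image, 0 = everything else.
real_data = [1] * 60 + [0] * 40  # 60% class A in the real world

def train(data):
    """A trivial 'model': the estimated fraction of class A in its training set."""
    return sum(data) / len(data)

def generate(p, n):
    """Generate n synthetic samples; the generator slightly over-represents class A
    (a small built-in bias, chosen arbitrarily for the sketch)."""
    biased_p = min(1.0, p * 1.05)
    return [1 if random.random() < biased_p else 0 for _ in range(n)]

p = train(real_data)
history = [p]
data = list(real_data)
for _ in range(10):
    data = data + generate(p, len(data))  # generated images flood the training set
    p = train(data)
    history.append(p)

print(f"real fraction: {history[0]:.2f}, estimate after 10 rounds: {p:.2f}")
```

The drift is mild here because the injected bias is tiny, but the mechanism is the one the bullet points describe: each generation of the model learns partly from the previous generation's output.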
Google has announced that it is working on tools to help users identify images generated or edited by artificial intelligence.
One of them is the use of metadata that will provide the following information:
- The name of the artificial intelligence model that was used to create the image.
- The date and time the image was created.
- The settings that were used to create the image.
- A statement that the image is generated or edited by artificial intelligence.
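As a rough illustration of the idea, such a provenance record could be represented and checked like this. The field names below are my own assumptions for the sketch, not Google's actual metadata schema, which the article does not specify:

```python
import json

# Hypothetical provenance record with the four pieces of information listed above.
# All field names and values are illustrative assumptions.
provenance = {
    "generator_model": "example-image-model-v1",   # AI model used to create the image
    "created_at": "2023-05-10T14:30:00Z",          # date and time of creation
    "settings": {"prompt_strength": 0.8},          # settings used (illustrative)
    "ai_generated": True,                          # explicit AI-generated statement
}

def is_ai_generated(metadata: dict) -> bool:
    """Return True if the metadata declares the image as AI-generated or AI-edited."""
    return bool(metadata.get("ai_generated", False))

# Round-trip through JSON, as the record might travel embedded in or alongside a file.
record = json.loads(json.dumps(provenance))
print(is_ai_generated(record))  # True
```

A viewer or photo app could run a check like `is_ai_generated` before displaying an image, flagging declared AI content to the user, though this only works if the metadata survives copying and re-encoding.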
Conflicting intentions
In conclusion, accompanying photographic images generated or edited with AI with metadata is a responsible move by Google.
The rest of the technology companies will surely follow suit. The intention of allowing the user to retain some control is noble, but the proliferation of closed, and even open, models may make the task difficult.
We will surely continue to see these efforts in the coming months. History will tell us whether they succeed.
*Daniel José Feijo is director of Computer Engineering at UADE