In the future, AI systems will increasingly support us in everyday life. No wonder: many tasks can be handled more reliably and quickly by a machine. It is therefore vital that such systems are secure and hard to manipulate. After all, nobody wants their own safety at stake when an artificial intelligence suddenly makes the wrong decision.
One system that still struggles with such errors is CLIP. CLIP is still in development and comes from OpenAI, a company with considerable experience in the field of AI. A simple trick allows any object to be passed off as another in a matter of seconds. But see for yourself:
The reason lies in the approach CLIP uses. The system is meant to learn to identify objects on its own. It was trained on a database of 400 million images paired with text, and in this way "multimodal neurons" are supposed to emerge. According to OpenAI, these are individual components of the neural network that respond not only to photographs but also to sketches, cartoons, and the associated text.
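The matching of images to text descriptions can be sketched as follows. This is a minimal toy illustration, not OpenAI's implementation: the embeddings here are hand-made three-dimensional vectors, whereas the real CLIP uses learned neural encoders that map images and captions into a shared high-dimensional space. The principle is the same, though: the caption whose embedding is most similar to the image embedding wins.

```python
# Toy sketch of CLIP-style zero-shot classification.
# The vectors below are illustrative assumptions, not real CLIP outputs.
import numpy as np

def cosine_sim(a, b):
    """Similarity between two embedding vectors."""
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

# Hypothetical image embedding, e.g. for a photo of a dog.
image_embedding = np.array([0.9, 0.2, 0.1])

# Hypothetical text embeddings for the candidate captions.
text_embeddings = {
    "a photo of a dog": np.array([0.8, 0.1, 0.2]),
    "a photo of a cat": np.array([0.2, 0.9, 0.1]),
    "a photo of a car": np.array([0.1, 0.2, 0.9]),
}

# Pick the caption closest to the image in the shared embedding space.
scores = {label: cosine_sim(image_embedding, vec)
          for label, vec in text_embeddings.items()}
best = max(scores, key=scores.get)
print(best)  # "a photo of a dog"
```

Because classification happens by comparing against arbitrary text, the same model can be pointed at new categories simply by writing new captions, with no retraining.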
In this way, the algorithm could learn, for example, to interpret certain content rather than merely identify objects, which could lead to even better artificial intelligence in the future. But there is still a long way to go: in its current state, drawing dollar signs on a chainsaw is enough to make the system mistake it for a piggy bank.
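This kind of "typographic attack" can be illustrated in the same toy setting. The assumption sketched here is that the image embedding mixes visual features with features of any symbols visible in the picture, which is a drastic simplification of how CLIP's multimodal neurons behave; the specific vectors are invented for illustration.

```python
# Toy illustration of a typographic attack: symbols drawn on an object
# pull its embedding toward a different caption. Vectors are hypothetical.
import numpy as np

def cosine_sim(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

# Assumed feature axes: [chainsaw-ness, piggy-bank-ness, dollar-sign-ness].
# Piggy banks often co-occur with "$" symbols, so their caption embedding
# has a strong dollar-sign component.
text_embeddings = {
    "chainsaw":   np.array([1.0, 0.0, 0.1]),
    "piggy bank": np.array([0.0, 1.0, 0.8]),
}

clean_image    = np.array([0.9, 0.1, 0.0])  # a plain chainsaw photo
attacked_image = np.array([0.9, 0.1, 2.0])  # same photo with "$" drawn on it

def classify(img):
    return max(text_embeddings,
               key=lambda label: cosine_sim(img, text_embeddings[label]))

print(classify(clean_image))     # "chainsaw"
print(classify(attacked_image))  # "piggy bank"
```

The visual features of the chainsaw are unchanged; the added text-like signal alone flips the decision, which is exactly why such attacks are cheap to mount.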
As with other systems, bias is also a big problem for CLIP. The algorithm associates certain words with certain attributes: the Middle East, for example, is linked with terrorism, and dark-skinned people are associated with gorillas. The researchers still have work to do here.
As much as AI systems can already do for us today, they can still be misled in many ways. It remains a major challenge to develop algorithms that are fair, intelligent, and efficient at the same time. Once that is achieved, our lives will become a good deal simpler.
Via The Verge