The Limits Of Artificial Intelligence

In everyday practice, we have become accustomed to placing artificial intelligence (AI) close to human intelligence. This is fueled by reports that many AI systems now accomplish things some consider intelligent and creative.

For example, AIVA (Artificial Intelligence Virtual Artist) is said to compose music, and other applications make photos of people look more realistic. Nevertheless, doubts about the performance of artificial intelligence are growing. The trade journal Chip identified the first cracks in its June issue, and concerns about the performance of the voice assistants from Amazon, Google, and Co. are being raised in more and more specialist blogs.

Artificial intelligence aims to enable machines to carry out tasks “intelligently.” However, what “intelligent” means and which technologies are used is not specified, as the Fraunhofer Institute explains. This conceptual proximity to human intelligence leads to the nonsensical question of when a machine will be able to imitate human intelligence.

Consulting firms in particular associate the use of artificial intelligence in finance with great promises. The management and technology consultancy Accenture, for example, writes in a study that artificial intelligence will enable financial services companies to completely redefine how they work. It is gradually developing into a concept that could turn the entire value chain of the financial sector upside down.

Typical Areas Of Application In Banking

The various technologies of artificial intelligence did not first appear in the financial sector with the digitization wave of the 2010s; artificial neural networks were already being applied to the stock market in the 1990s. Artificial neural networks are processes modeled on how the brain works: they record information, weight it based on previous knowledge, and are used, for example, to recognize images, speech, and patterns. With this concept, which was already being explored in the 1940s, one hopes, for example, to better predict the course of share prices.
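The weighting-and-feedback idea can be sketched with a single artificial neuron (a perceptron), the simplest building block of such networks. The code below is a toy illustration, not a stock-prediction model; all names and figures are invented.

```python
# Minimal sketch of the neural-network idea described above: inputs are
# weighted, summed, and passed through an activation function, and the
# weights are adjusted ("learned") from labelled examples.

def step(x):
    # Simple threshold activation: fires (1) or stays silent (0).
    return 1 if x >= 0 else 0

def train_perceptron(samples, epochs=20, lr=0.1):
    # samples: list of (inputs, expected_label) pairs
    n = len(samples[0][0])
    weights = [0.0] * n
    bias = 0.0
    for _ in range(epochs):
        for inputs, expected in samples:
            predicted = step(sum(w * x for w, x in zip(weights, inputs)) + bias)
            error = expected - predicted          # feedback signal
            weights = [w + lr * error * x for w, x in zip(weights, inputs)]
            bias += lr * error
    return weights, bias

# Learn the logical-AND pattern from four labelled examples.
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(data)
print([step(sum(wi * xi for wi, xi in zip(w, x)) + b) for x, _ in data])
# → [0, 0, 0, 1]
```

Real networks stack thousands of such units in layers, but the principle (weight inputs, compare with the expected result, adjust) is the same.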

Specific use cases in finance include the early detection of cyberattacks, fraud detection, and the detection of money laundering activities. For this purpose, a system is trained, for example, with transaction data from past cases. The hope is that it will flag fraudulent transactions with similar patterns so that a human processor can verify them. If the machine classifies a case incorrectly, it learns from the feedback of the human operator. The system can only ever be as good as its training data: new fraud variants cannot be detected this way if the system previously classified them as harmless.
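The flag-and-correct loop described above might be sketched roughly as follows. The similarity rule, thresholds, and transaction fields are all invented for illustration; real fraud-detection systems use far richer features and statistical models.

```python
# Hypothetical sketch of the feedback loop: transactions resembling past
# fraud cases are flagged, and the human reviewer's verdict becomes new
# training data. The similarity rule below is deliberately crude.

from dataclasses import dataclass

@dataclass
class Transaction:
    amount: float
    hour: int          # hour of day, 0-23

def similarity(a, b):
    # Invented rule: "similar" means close in amount and time of day.
    return abs(a.amount - b.amount) < 50 and abs(a.hour - b.hour) <= 2

class FraudFlagger:
    def __init__(self, known_fraud):
        self.known_fraud = list(known_fraud)   # training data from past cases

    def flag(self, tx):
        return any(similarity(tx, f) for f in self.known_fraud)

    def human_feedback(self, tx, is_fraud):
        # A missed fraud case, once confirmed by the reviewer, is learned.
        if is_fraud and not self.flag(tx):
            self.known_fraud.append(tx)

flagger = FraudFlagger([Transaction(980.0, 3)])

print(flagger.flag(Transaction(995.0, 2)))   # resembles a known case: True
print(flagger.flag(Transaction(12.0, 14)))   # a new variant goes unnoticed: False
flagger.human_feedback(Transaction(12.0, 14), is_fraud=True)
print(flagger.flag(Transaction(15.0, 14)))   # recognised after feedback: True
```

The last three lines illustrate exactly the limitation the paragraph names: the new variant is invisible until a human labels it.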

The risk assessments of loans by one start-up show that such systems can be error-prone. It works with machine-learning algorithms that determine the default risk of loans from large amounts of data. If the algorithms work well, the total cost of defaulted loans should be lower than the interest income adjusted for the respective default risks. That does not always seem to have been the case in the past, as reported in the financial and start-up scenes. The authors questioned whether the in-house algorithms with which the start-up wanted to revolutionize global lending might not be working correctly.
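The break-even condition mentioned here, that risk-adjusted interest income must exceed the expected losses from defaults, can be illustrated with a back-of-envelope calculation (all figures invented):

```python
# Toy book of two loans: (principal, interest_rate, predicted_default_probability).
# If the default predictions are accurate, interest income should cover
# expected losses; if the model underestimates defaults, it will not.
loans = [
    (10_000, 0.05, 0.02),
    (5_000, 0.12, 0.08),
]

interest_income = sum(p * r for p, r, _ in loans)
# Simplifying assumption for the sketch: a default loses the full principal.
expected_losses = sum(p * pd for p, _, pd in loans)

print(interest_income, expected_losses, interest_income > expected_losses)
```

If the algorithm systematically assigns too-low default probabilities, the comparison flips in reality even though it looks fine on paper, which is the failure mode the reports describe.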

Advice From Robots Is Reaching Its Limits

Speech recognition (including the recognition of emotions, by the way) now works very well with the help of neural networks. Specifically, the systems recognize sounds as speech and convert them into text. However, this conversion does not mean that the actual request for information or an order can be derived from the content of the text. Some may know this from using Amazon’s “intelligent speaker,” Alexa. It can now recognize speech very well and convert it into text. Often, however, it cannot do anything with orders or questions. If you ask Alexa, for example, what artificial intelligence is, you will get a suitable short answer. But if you ask why Alexa is an artificial intelligence, the system does not give a meaningful response.

A free dialogue in which an AI system is supposed to replace a consultation, for example, ends relatively quickly in frustration. So-called chatbots (text robots that can conduct dialogues) can give more or less meaningful answers to relatively simple standard questions, but these usually come from a knowledge database. As soon as things get a little more demanding, chatbots generally reach their limits. This also applies to the systems developed with significant financial resources by Google, Amazon, and Co.
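A knowledge-database chatbot of the kind described above often amounts to little more than a lookup: questions that match an entry get an answer, everything else gets a fallback. The entries below are invented for illustration.

```python
# Minimal illustration of a knowledge-database chatbot: answers are looked
# up, not understood. Any question outside the database hits the fallback.

faq = {
    "what is artificial intelligence": "AI aims to let machines perform tasks intelligently.",
    "what are your opening hours": "Our branches are open 9:00-17:00 on weekdays.",
}

def chatbot(question):
    key = question.lower().strip("?! ")
    return faq.get(key, "Sorry, I did not understand that.")

print(chatbot("What is artificial intelligence?"))
print(chatbot("Why are you an artificial intelligence?"))  # outside the database
```

The second question fails for the same reason the Alexa example does: the text was transcribed perfectly, but nothing maps it to an answer.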

Use cases in which answers are determined from a knowledge database, an expert system, or according to a formula do not count as artificial intelligence. Formula- or rule-based applications include most robo-advisors, i.e., digitally automated investment advice. Based on specific questions, they create proposals for the composition of assets and buy and sell the related securities. Robo-advisors provide relatively good but rarely above-average investment results.
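A formula-based robo-advisor can be sketched as a fixed mapping from questionnaire answers to an asset mix. The rule below uses the classic “100 minus age” heuristic shifted by a risk-tolerance answer; it is an invented illustration, not any provider’s actual formula.

```python
# Illustrative rule-based allocation: a fixed formula maps questionnaire
# answers to a portfolio mix. No learning is involved anywhere.

def propose_allocation(age, risk_tolerance):
    """risk_tolerance: 1 (cautious) .. 5 (aggressive)."""
    # Rule of thumb: equity share = 100 - age, shifted by the risk answer,
    # clamped to the range 0..100.
    equity = max(0, min(100, (100 - age) + (risk_tolerance - 3) * 10))
    bonds = 100 - equity
    return {"equities": equity, "bonds": bonds}

print(propose_allocation(age=40, risk_tolerance=3))   # {'equities': 60, 'bonds': 40}
print(propose_allocation(age=70, risk_tolerance=1))   # {'equities': 10, 'bonds': 90}
```

Because the output is fully determined by the formula, such a system is deterministic and auditable, but it is rule-following, not artificial intelligence in the sense discussed above.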

Also Read: AI In Medium-Sized Companies