AI is overrated
Artificial Intelligence (AI for brevity from now on) is the future.
There’s no doubt about that. It brings us closer to the vision that the brightest minds of past centuries predicted. We are approaching the point where machines become better than us in general, not just at specific computational tasks. Opinions differ on whether that’s good or bad, but enthusiasts and critics alike agree it will happen soon.
Thousands of engineers, entrepreneurs and visionaries around the world work in perfect unison to develop AI. Some of them are so bullish on it that, in my opinion, they rely on it too heavily. There’s a lot of buzz around the technology, and AI sometimes gets used prematurely in areas that are not yet ready for it.
Computers are really good at crunching numbers, which makes it possible to automate any process that can be described by an algorithm working with numbers. Machine learning methods let algorithms improve themselves, and the more raw data they are trained on, the better they work.
If you feed a machine human portraits and teach it to recognise facial features, it will eventually become capable of creating portraits of people who never existed. The quality of the results depends on how many portraits were used as the training sample, but the more you provide, the better the machine gets.
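To make that “more data, better results” claim a bit more concrete, here is a minimal sketch in Python. It trains the same simple model on ever larger slices of one labelled pool and measures how the score grows; the synthetic dataset, the model choice and the sample sizes are all placeholders picked for illustration, not anything a real portrait generator would use.

```python
# Minimal sketch: model quality vs. amount of training data.
# Synthetic data stands in for the portrait example above.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# One fixed pool of labelled examples; we train on growing slices of it.
X, y = make_classification(n_samples=20_000, n_features=40,
                           n_informative=30, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0)

for n in (100, 1_000, 10_000):
    model = LogisticRegression(max_iter=1_000)
    model.fit(X_train[:n], y_train[:n])      # learn from the first n examples
    score = model.score(X_test, y_test)      # accuracy on unseen data
    print(f"trained on {n:>6} samples -> accuracy {score:.3f}")
```

The exact numbers are irrelevant; the point is that quality tracks the training pool, which is exactly the constraint the rest of this argument rests on.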
Now, if you ask real humans to pick the images of people they like and feed this information to the same machine learning algorithms, the output can be adjusted to produce only the portraits those humans find especially beautiful.
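One generic way to wire such feedback in (this is a sketch of the general idea, not of any particular product) is to train a separate preference model on the human votes and use it to filter or re-rank whatever the generator produces. In the toy code below the “portraits” are just random feature vectors and the likes are simulated, so every name and number in it is an assumption for illustration only.

```python
# Minimal sketch: steer a generator with a preference model learned from likes.
# Feature vectors stand in for portraits; the like/dislike labels are simulated.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# 1. Human feedback: feature vectors of rated portraits plus a liked/not-liked label.
rated_portraits = rng.normal(size=(5_000, 16))
liked = (rated_portraits[:, 0] + 0.5 * rated_portraits[:, 1] > 0).astype(int)  # stand-in taste

preference_model = LogisticRegression(max_iter=1_000)
preference_model.fit(rated_portraits, liked)

# 2. The generator proposes candidates (here: just more random vectors).
candidates = rng.normal(size=(1_000, 16))

# 3. Keep only the candidates the preference model scores highly.
scores = preference_model.predict_proba(candidates)[:, 1]
top = candidates[np.argsort(scores)[::-1][:20]]
print(f"kept {len(top)} of {len(candidates)} candidates, best score {scores.max():.2f}")
```

The weak spot is step 1: the preference model can only ever reflect the tastes that made it into the rated sample, which is the limitation described next.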
However, that sample might not contain everything a given person likes, so the results will always be limited by the base data loaded in the first step of the process.
In other words, algorithms are only as good as the training data they learn from.
Now, imagine you are a social network with billions of users, each of them explicitly liking things. You might think you have enough data for your algorithms to choose what your users will like next. The ugly truth is: you will never have enough data to do it really well.
First, not all potentially important content ever gets created, so it is impossible to teach AI about it.
Second, there’s no one-to-one relationship between “best” and “popular”. People want the information that suits them best, while AI can only take a user’s demographics and likes into consideration, and that is not enough to choose the “best” content.
Third, even if “best” did equal “popular”, not all important content actually gets “liked”: there’s no obligation to press that thumbs-up icon, and more people keep their feelings to themselves than you can imagine.
Let’s say these are just three of the factors that influence the quality of the feed served to us. If AI were accurate in 50% of cases for each of these factors (I am being incredibly generous to AI’s capabilities here), the final result would be 50% of 50% of 50% good. That’s just 12.5% good, and it rests on an extremely optimistic assumption; reality can be ten times worse. And this is a very simplified calculation, since far more factors are involved.
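For the sceptical reader, here is the back-of-the-envelope arithmetic spelled out, under the same generous assumption of three independent factors, each guessed right half the time:

```python
# Independent factors multiply, so the errors compound.
factor_accuracy = 0.5        # generous 50% assumption for each factor
factors = 3                  # only the three factors above; reality has more
print(factor_accuracy ** factors)   # 0.125, i.e. the feed is ~12.5% "good"
```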
Now, if AI-based feeds are only about 10% good, should they be used as the default? Not at all, if you ask me. But for the overly AI-optimistic companies out there, it’s not just the default, it’s the only option! There’s no way to control your own feed on Facebook or Instagram. AI decides entirely on your behalf what might be interesting to you.
I find this inappropriate. The power of AI could (and should) be used to give us suggestions; that is something AI can be good at, because there’s an algorithm for it. But it should not replace our own reasoning about what’s good and what’s not. In the end, that’s what still makes us human. What belongs to humans should rest with humans.