From time to time I will write short essays on artificial intelligence (AI). AI is something I know little about, so please forgive my ignorance. Wait… what I just said already sounds dangerously foolish: if I am not an expert, I should probably pipe down. But I won’t be quiet, because AI is now everywhere and, at its core, it commonly employs linear regression analysis, which has been around forever. At least that is what I would like to believe at this stage.
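That claim can at least be made concrete. Here is a minimal sketch (my own toy illustration, not drawn from any particular AI system) of fitting a line by ordinary least squares, which is essentially the computation a single artificial neuron performs before its nonlinearity is applied.

```python
import numpy as np

# Toy data generated exactly from y = 2x + 1 (my own example values).
x = np.array([0.0, 1.0, 2.0, 3.0])
y = 2.0 * x + 1.0

# Design matrix with a column of ones for the intercept term.
A = np.vstack([x, np.ones_like(x)]).T

# Ordinary least squares: solve for slope w and intercept b.
w, b = np.linalg.lstsq(A, y, rcond=None)[0]
print(round(w, 3), round(b, 3))  # recovers w = 2.0, b = 1.0
```

A neural network stacks many such weighted sums, each followed by a simple nonlinear function, so the old workhorse really is in there, just repeated at scale.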
I am starting to read Bostrom’s “Superintelligence,” and it paints a rather gloomy picture of what is in store for us if things go astray. The point is that machines may one day surpass humans. Our fate will then depend on the actions of some powerful AI system, but I have a strong feeling that this won’t happen for a while, because training sets are created by humans. Having said that, I would hate for our role to be relegated to the generation of those training sets. Don’t laugh: plenty of Edisonian science feeds them, so many of us are at it already.
Here is an uncomfortable thought I just had: the scary moment will come on the day our own intelligence is referred to as some primitive proteinaceous intelligence (PI). As a corollary, there will then be nothing artificial about AI. Think about it.