To AI, or not to AI, that is the question: Whether 'tis nobler in the mind to suffer The slings and arrows of outrageous fortune, Or to take arms against a sea of troubles And by opposing end them. To die—to sleep, …
The main plot
If you search the internet for the cons, dangers or disadvantages of AI, the results are relatively trivial: cost, addiction, lack of personal connection, unemployment, efficient decision-making, loss of information; or automation-spurred job loss, privacy violations, deepfakes, algorithmic bias caused by data, socioeconomic inequality, weapons automation. Some of these are issues we have been dealing with since the first industrial revolution, and some are our everyday problems, worries or concerns, so none comes as much of an unexpected novelty or surprise. Besides, the list hardly goes beyond six items, while the benefits list easily hits fifteen and above. The benefits outnumber the disadvantages by a factor of 2.5, so we should all be well and safe, right? Yes, for the time being. However, …
Against all odds, when it comes to AI it is not the bad news that makes the news; it is the good news that we all hear, and probably the news we are most interested in. But if and when you look a little further down the line, another side to this story unfolds. This side, however, is narrated with due diligence and extreme caution. The argument for and against “Artificial General Intelligence” – which I would like to call “independent Artificial Intelligence”, or “iAI” – is one of the most recurring concepts at the centre of such theories and debates on and around AI. When this would happen remains a matter of debate: some argue that it is just around the corner, while others do not believe it will happen sooner than 50 years from now.
The former group refer to Moore’s Law as the theoretical basis for their argument. The latter argue that we could create AGI anywhere along a spectrum from the managed/guided evolution of a simple algorithm through to a complete emulation of the human brain, and that while the first is much easier to create, the second is much easier said than done. They refer to Nick Bostrom’s book Superintelligence, in which AGI – or iAI – evolves enough to turn itself into an “agent” that first reaches the human level of intelligence and is then able to develop itself into a superintelligence through recursive self-improvement. Whether we advocate the first theory or the second, once this “intelligence explosion” occurs – whether in 2 or 50 years’ time – it could catch us all by surprise, most likely because there is no element or principle of ethics in what machines can absorb, prioritise, learn, interpret, and apply. From a different angle to Bostrom’s concept of the “agent”, Drexler proposes what he calls “Comprehensive AI Services”, or CAIS, as an alternative route to AGI. In his view, AI drives technology improvements – on both the hardware and software fronts – which in turn automate ever more complicated tasks. The result is a recursive self-improvement process within the AI regime that has already long since started.
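As an aside, the force of the Moore’s Law argument is easy to quantify: a quantity that doubles every two years grows by a factor of 2^(years/2). A minimal sketch (the two-year doubling period is the classic Moore’s Law cadence; the horizons are arbitrary and purely illustrative, not a forecast):

```python
def growth_factor(years: float, doubling_period: float = 2.0) -> float:
    """Multiplicative growth after `years`, doubling every `doubling_period` years."""
    return 2.0 ** (years / doubling_period)

# Over 10 years: 2**5 = 32x.
print(growth_factor(10))         # 32.0
# Over 50 years: 2**25 = 33,554,432x -- why "just around the corner"
# and "50 years away" describe wildly different worlds.
print(round(growth_factor(50)))  # 33554432
```

The same arithmetic also shows why the two camps disagree so sharply: under sustained exponential growth, the gap between a 2-year and a 50-year horizon is not a factor of 25 but of tens of millions.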
I lean towards Drexler’s standpoint, with its idea that this process has already started, more than towards Bostrom’s, which suggests that the process will not start until AI develops itself into a human-like agent. However, each view has its strengths and weaknesses. While Drexler’s view seems the more alarming (which is why I favour it), it sees the “central AI” more like a search engine: extremely quick at matching the task at hand with one or a series of algorithms already developed by local/localised AIs. In other words, the central AI has no inherent independent decision-making capability that might enable it to mimic the function of the human brain. Bostrom’s standpoint, by contrast, casts AGI as a fully-fledged, superintelligent, human-like brain that acts as an impenetrable, closed-circuit black box. On this point I am more inclined to take up Bostrom’s theory, although his optimistic view of how far we still are from this concept literally “materialising itself” is what puts me off. In a sense, I am eclectic: I pick up on the human-agent nature of AI, as suggested by Bostrom, while believing that the process started long ago, as Drexler asserts.
If there is any singular answer to the question/concern/point raised – and I very much doubt there is – I for one am not claiming that I can provide it on my own. So here is some food for thought, and I hope that with some input from you, AI enthusiasts, we may develop a more in-depth understanding of the issue. Or perhaps we shall simply wait and see what our friend or foe, iAI, has in store for us – which might be a definite answer, as and when the time comes, supposing such a question still exists there and then.