We all remember HAL from “2001: A Space Odyssey” (or at least I DO) and I, for one, am a fan of “Person of Interest”, but this is not science-fiction anymore, nor is it illusion or delusion…
While Forbes listed 27 examples of AI (Artificial Intelligence) and ML (Machine Learning) in practical use nearly three years ago (30 April 2018), a more recent internet search on AI and ML applications points to the following top 11 areas:
- Chatbots
- eCommerce
- Improving Workplace Communication
- Human Resource Management
- Healthcare
- Cybersecurity
- Logistics and Supply Chain
- Sports Betting Industry
- Streamlined Manufacturing
- Casino/Hotels/Integrated Resorts
- Retail
Some of these are quite familiar, and we feel utterly comfortable with them; some we would be more than delighted to get rid of; some are amusing to witness, even if only for exploration's sake from a layperson's or outsider's point of view; but some are thought-provoking, if not discomforting or even frightening.
What’s Missing?
One area missing from this list, however, which was included in the Forbes report but does not seem to be trendy nowadays, is "Creative Arts" [and Design]. One can rightly argue that design means different things to different people and in different disciplines, ranging from architectural design, engineering design, industrial design, manufacturing design and mechanical design, to graphic design, animation design, interior design, motion graphics design, product design, fashion design, and UI/UX design. We fully relate to this argument and accept that ML and AI will therefore have different levels of applicability to different design disciplines.
If design is broadly taken as a subset of the problem-solving process, it includes all or a number of the constituents of problem solving, namely:
- defining a/the problem;
- establishing the cause or reason of that problem;
- identifying alternative solutions;
- prioritising the alternatives according to the given problem and other contextual conditions;
- selecting the best alternative; and (potentially)
- implementing the solution (with a feedback loop for continuous improvement, depending on the approach to problem solving).
If so, then some of the reiterative processes along the way can be managed through machine-assisted algorithms, as sketched below.
Relying on an automated system for a series of reiterative tasks that lead to a meaningful outcome – solving a problem, producing an artefact or an edifice, testing a series of options, supporting a decision process, and so on – requires a system capable of running those tasks through a significantly high number of reiterations, at a speed and level of consistency that is neither possible for nor appealing to us as human beings.
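To make this concrete, here is a minimal sketch of such a machine-assisted reiterative loop, following the generate / prioritise / select steps above. The candidate representation, the weights, and the scoring function are all assumptions invented for this illustration, not a real design tool:

```python
import random

def generate_alternative():
    """Generate a candidate design as a (cost, performance) pair."""
    return (random.uniform(0, 1), random.uniform(0, 1))

def score(candidate, weight_cost=0.4, weight_perf=0.6):
    """Prioritise: lower cost and higher performance score better."""
    cost, perf = candidate
    return weight_perf * perf - weight_cost * cost

def search(iterations=100_000):
    """Reiterate far more times than a human would care to."""
    best, best_score = None, float("-inf")
    for _ in range(iterations):
        candidate = generate_alternative()
        s = score(candidate)
        if s > best_score:
            best, best_score = candidate, s
    return best, best_score

if __name__ == "__main__":
    best, s = search()
    print(f"Selected alternative {best} with score {s:.3f}")
```

The machine happily evaluates a hundred thousand alternatives; a human designer would not, and need not.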
Where does AI come in?
Here is where AI comes in handy, and in close conjunction with it comes ML: a reiterative and accelerated solution-based learning process enabled through machine (computer) operations – and this is just the basic/original definition. I have intentionally used solution-based as opposed to problem-based learning, because a machine's approach to problem solving is more concerned with the solution than with the problem itself; the approach to ML for AI-driven problem solving is therefore more solution-based than problem-based. But why is this important? It is important because machines, by their nature, are set up to achieve certain results/solutions for certain given problems, and throughout the process of reaching those results or achieving those solutions there is very little, if any, room for non-materialistic concerns – among which ethics is one of, if not the, most important determinants.
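A minimal sketch illustrates the point: a learning loop driven purely by a numeric objective. The toy loss function is my assumption for illustration; notice that anything not encoded in it – ethical concerns included – simply does not exist for the machine:

```python
def loss(x):
    """Toy objective: distance from the 'solution' x = 3."""
    return (x - 3.0) ** 2

def gradient(x):
    """Analytic derivative of the toy loss."""
    return 2.0 * (x - 3.0)

x = 0.0                      # arbitrary starting guess
for step in range(1000):     # reiterate towards the solution
    x -= 0.01 * gradient(x)  # the only signal is the objective

print(f"Converged to x = {x:.4f}, loss = {loss(x):.6f}")
```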
How dangerous is AI?
Notwithstanding all the ethical issues and concerns involved in this discussion, in the era of machines we are all very well acquainted with human errors, where a single one could lead to a major catastrophe such as the Bhopal industrial disaster (1984), the Chernobyl accident (1986) or the Tokaimura criticality accident (1999). Now imagine such an error being multiplied by and through the reiterative processes of ML; the catastrophe, or even apocalypse, would be well on the horizon. But can this happen at all? Are machines not famous for being ruthlessly, heartlessly infallible? The answer is: yes, they are. If so, how can they ever make mistakes? They cannot, is the answer, but… and there is a big but in here… The process inherently associated with ML is very simple, but such that if a tiny uncertainty, or a flavour of a less-than-100%-correct solution, finds its way into the system, that mistake gets multiplied exponentially through the number of reiterations in the process, growing in size, impact, frequency, and probability of occurrence.
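A quick back-of-the-envelope sketch shows how fast this compounds. The 0.1% per-step deviation below is an assumed figure, chosen only to make the effect visible:

```python
# How a tiny error compounds over many reiterations.
error_per_step = 0.001   # 0.1% deviation per reiteration
for steps in (1, 100, 1_000, 10_000):
    accumulated = (1 + error_per_step) ** steps
    print(f"after {steps:>6,} reiterations: x{accumulated:,.2f}")

# after      1 reiterations: x1.00
# after    100 reiterations: x1.11
# after  1,000 reiterations: x2.72
# after 10,000 reiterations: x21,916.68
```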
Moreover, ML models are, more often than not, infamous for being black boxes: you feed in known inputs and expect desired outputs without really knowing how the outputs are inferred from the inputs. This is where danger number 1 emerges: GIGO (garbage in, garbage out) or RIRO (rubbish in, rubbish out), as they say – and we already know that we have wrongly let our AIs learn to be racist, and were warned early on about the rise of the racist robots.
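A deliberately crude sketch of GIGO: a "model" that simply learns the most common label per input from its training data. The biased dataset below is fabricated for illustration:

```python
from collections import Counter

biased_data = [
    ("applicant_A", "approve"), ("applicant_A", "approve"),
    ("applicant_B", "reject"),  ("applicant_B", "reject"),
    ("applicant_B", "reject"),  # group B was rejected by biased humans
]

def train(data):
    """Learn the most common label seen for each input."""
    votes = {}
    for x, y in data:
        votes.setdefault(x, Counter())[y] += 1
    return {x: c.most_common(1)[0][0] for x, c in votes.items()}

model = train(biased_data)
print(model["applicant_B"])  # -> 'reject': the bias is faithfully learned
```

The model does exactly what it was trained to do; the garbage was in the data all along.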
Danger number 2 arises as and when the AIs start – most likely based on their very nature, rightly or wrongly – either to resort to shortcuts in their reiterative tasks or to develop high- (or low-) level alternative logic or rules of interaction/communication, which carry more characteristics of their black-box working mechanism into the outer world than can be perceived and comprehended by us as human beings. Yes, I know it might sound far-fetched, but it is not untrue. We all remember, not many years ago, when Facebook allegedly announced the shutdown of their AI bots after they started talking to each other in an unknown language. But it doesn't really matter, does it? The machines were supposed to get us to the desired outcome without us being really aware of, or concerned about, the black-box operations in between, right? And that's cool, right? No, not really! Rather creepy… even dead scary!
What’s the designer’s point of view?
Nowadays we hear about flatpack ML, for which we need hardware with cores optimised for ML (potentially very straightforward to procure or even hire), high-level development platforms and languages, and finally open model repositories from which we can pick and choose pre-trained models – meaning we no longer need expert knowledge to construct the model. This is the infrastructure of flatpack ML, and it means we do not need to be ML experts to use ML; instead we can concentrate on what we do best as users and/or designers. We do not need to know how the black box is programmed (as opposed to a few years back, when we at least needed to be able to program it, and let it develop at its own discretion). That all sounds promising in leading us from technology-led development, where user experience is more of an afterthought, to design-led development, where user experience comes into the equation from the start – thereby promoting the very ethically correct human(e)-centred AI/ML.
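As an illustration of how low the barrier has become, here is a minimal sketch using the Hugging Face transformers library – one possible open model repository; the library choice and the example sentence are my assumptions, not something named in this post:

```python
# "Flatpack ML": picking a pre-trained model off the shelf, with no
# knowledge of how the black box was trained.
# Assumes: pip install transformers  (plus a backend such as PyTorch).
from transformers import pipeline

# Downloads a default pre-trained sentiment model from the model hub.
classifier = pipeline("sentiment-analysis")

# Use the black box without ever seeing inside it.
result = classifier("This design tool is a joy to use.")
print(result)  # e.g. [{'label': 'POSITIVE', 'score': 0.99...}]
```

Three lines of code, and not a single decision about architecture, training data, or objective was ours to make – which is precisely the point, and precisely the worry.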
And finally, yes, this seems to be cool; yet on second thoughts, what it really does is detach users from the idea of the black box and lure them even further into training the machine, in the hope that one day, thankful for being allowed to learn enough, it will find a way to develop some level of conscience and conscientiousness to do what is in the best interest of humankind. That is some good level of wishful thinking, and I would very much like to relate to it, but I also have my reservations, backed up by what we have all seen and/or heard about if, when, and what happens if things go wrong… and that's not illusion or delusion or science fiction anymore… watch "Person of Interest"!
* The idea of this blog is owed to my colleague and friend Dr Marcus Winter at School of Computing, Engineering and Mathematics, University of Brighton.