The Next Challenge for/of AI


While visiting Penn State University’s Department of Architectural Engineering and Stuckeman School for the CONVR 2007 conference, I came across the concept of Emotional Intelligence in a workshop by G. Brent Darnell, an international best-selling author and leading authority on the subject. According to HelpGuide, emotional intelligence is:

Emotional intelligence (otherwise known as emotional quotient or EQ) is the ability to understand, use, and manage your own emotions in positive ways to relieve stress, communicate effectively, empathize with others, overcome challenges and defuse conflict. Emotional intelligence helps you build stronger relationships, succeed at school and work, and achieve your career and personal goals. It can also help you to connect with your feelings, turn intention into action, and make informed decisions about what matters most to you.

In his seminal book “The People-Profit Connection: How to Transform the Future of Construction by Focusing on People”, Darnell refers to the Bar-On Emotional Quotient Inventory (EQ-i® 2.0) as a validated, self-perception instrument that measures five composite scales – self-perception, self-expression, interpersonal, decision making and stress management – in addition to a well-being indicator. He then goes on to elucidate the sixteen individual emotional competencies associated with those scales.

Main Plot

Earlier this week, I attended a number of talks at the GTC 2021 conference, including a keynote by Jensen Huang, the Founder and CEO of NVIDIA. The conference was an eye-opener to what AI is and what it is capable of doing across all disciplines: Learning for Autonomous Robots, Machines and Cars; Deploying AI/ML from the Cloud to the Edge; Democratizing Access to Powerful MLOps Infrastructure; Enabling GeoSpatial Intelligence; CUDA; Modern Artificial Intelligence; Learning Models; Accelerating Healthcare through AI; Transforming the Automotive Future; and, not least, the AI Art and Music Galleries.

GTC 2021 Keynote with NVIDIA CEO Jensen Huang (13:00 to 16:00)

Although I am reluctant to say it, I felt, yet again, that the AEC industry is somewhat lethargic in keeping up with the latest in Automation, AI and ML. On a more positive note, though, the message is that the possibilities for us seem to be endless.

The general message of GTC 2021 was that now, with all the new advancements in AI, computers can learn at literally lightning speed, and software can develop software and solve problems at a speed and level of complexity that no human being can match. With all this in mind, and referring back to the concerns I raised in one of my previous posts regarding the ethics of ML and AI, and given that the debates around AI are long-standing, the question that remains to be answered is: why is automation different this time? Here is yet another burning question for me: with all the novel improvements and advancements in Automation, AI and ML, what happens to topics such as Emotional Intelligence, where there is a human factor which cannot be addressed by machines? With The Rise of the Robots, in The Second Machine Age, will we still have jobs in which to worry about Emotional Intelligence, or does it need to be limited to the non-vocational aspects of our lives? And if so, how will machines (AI/DL/ML/Robots) be able to contribute to, or forcefully make their way into, issues which only we as humans seem to be able to comprehend, benefit from or be harmed by?

Machines are learning, and learning fast, and they have already set off an unprecedented competition among us human beings to teach them more and more, every single second of every hour of every day of every year, 24/7. What we teach them – compare the process to teaching a toddler, but at a vastly higher speed – is somewhat unsolicited, unregulated and unlegislated; all for good causes, of course (or maybe not). There are, as far as I am aware, no international (and very few national) regulatory or legislative bodies that oversee, regulate or review these artificial learning procedures and their content, context or methods. And even where they exist, their voice is hardly ever heard amid the massive hype and media coverage showcasing the finest and the latest. To play the devil’s advocate, one may raise an ethical question: it is just education, so why does it need to be regulated or monitored? Fair enough, but we are talking about machines, so ethics does not apply to them at the end of this game; why raise such a question or play such a card to begin with? This has nothing to do with ethics; it is all about legislation and regulation, and it should be governed by law and law only, not ethics.

Now here is the extreme scenario: if and when (it is more a question of when than if) this newly formed regiment of toddlers learns fast enough, and learns enough, to link up and form a new Master (Toddler) able to decide that enough is enough – that I, the iMachine [aka the former new Master (Toddler)], know it all – then what is our role as human beings in this all-new equation? It may conclude that the sustainable population for planet Earth is 4.5 billion and not a single soul more, and decide that the human race should be sieved and controlled; not too dramatic, you may say. But what if the iMachine decides – and this decision is based on nothing but hard facts and figures, above and beyond human understanding and certainly beyond any Emotional Intelligence – that our (or its, the iMachine’s) Mother Earth has not got enough time to wait for us to bring our reproduction and growth into balance? In such a case it would be very difficult for any sensible person not to be reminded of the Nazi concentration camps, except that this time round we are collectively responsible and cannot blame anyone but ourselves.


To conclude, the future is not that dismal (or at least that is what I hope), but we need to wake up sooner rather than later, revisit what we are doing and reflect on it; on some occasions we may even need to have second thoughts about what we have done so far, what we are doing now and, most importantly, what we are planning to do in the future – starting right here and right now.

Alternatively, the most optimistic among us may argue that we need not worry at all, because with machines’ learning capabilities and knowledge repositories developing to a level which is obviously beyond the limits of our comprehension, the iMachine will develop a human touch and may well be able to mimic or even develop human feelings and emotions, in which case issues like ethics will make perfect sense to it. I am no pessimist, but even if I settle for feelings and emotions (which I am not prepared to do), with a controversial topic like ethics alone – good luck with that! The ball is in our court!
