A Hitchhiker’s Guide to AI – a different world
The thesis is this: by processing a much larger amount of decision-relevant information in real time, and with a tighter feedback loop, learning can be enhanced and the risk of undesired outcomes greatly reduced. Humanity has always wanted that - to minimize the risk of undesired outcomes; with dumb devices now doing much of the laborious work involved, the process has become that much less laborious and, consequently, less expensive.
This basic thesis can be applied to any number of use cases, and that, essentially, is the industry of chatter and hype that has formed around the idea of AI. Below I take a few common examples where the deployment of artificial intelligence technologies can have - or has already had - significant and tangible impacts.
Disaster Management and Climate Change
One of the more far-sighted uses of artificial intelligence techniques has been in research into disaster management and climate change. It has been known for quite some time that human activity has an outsized impact on the climate. However, our knowledge of the sheer scale of that impact, of the likelihood of various outcomes, and of their consequences for humanity in general has been greatly improved through the enhanced information collection and learning that artificial intelligence techniques enable. To imagine: there would have been no (public avatar of) Greta Thunberg had it not been for AI!
It is now possible, using networks of sensors collecting seismic data, to detect earthquakes as they begin and issue early warnings to populations in affected areas, saving thousands of lives. The difference between AI and no AI can also be judged by the difference in public response the next time a tsunami hits a coastal country. Sensors tracking river water levels and flow rates can predict the onset of floods well in advance, giving people in at-risk areas time to prepare, and radically altering the extent to which insurance coverage can be extended to those living under the shadow of such events. These are unqualified instances of desired outcomes stemming from risk minimization - certainly for those whose lives have been saved in the nick of time!
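The flood-warning idea above can be sketched in a few lines. This is a minimal, hypothetical illustration - the thresholds, function name, and the assumption of hourly readings in metres are all invented, and a real warning system would be far more sophisticated:

```python
# Hypothetical sketch: raising a flood alert from river-level sensor
# readings. Thresholds and units are illustrative assumptions, not
# values from any real warning system.

def flood_alert(levels_m, danger_level_m=5.0, rise_rate_m_per_h=0.5):
    """Return True if the latest reading breaches the danger level,
    or if the river is rising faster than rise_rate_m_per_h.
    Readings are assumed to be hourly, in metres."""
    if not levels_m:
        return False
    # Absolute danger level breached.
    if levels_m[-1] >= danger_level_m:
        return True
    # Rate of rise between the last two readings is too steep.
    if len(levels_m) >= 2 and (levels_m[-1] - levels_m[-2]) >= rise_rate_m_per_h:
        return True
    return False
```

The point is the feedback loop: each new sensor reading re-runs the check, so the lead time for an alert shrinks from hours of manual observation to seconds.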
The use of data-collection sensors to track the consumption of amenities such as electricity and water in real time, combined with the faster feedback loops enabled by greater processing power, has made possible "smart utility networks": systems that predict periods of heavy or reduced usage in advance and flag abnormal or excessive consumption down to the level of the individual meter, enabling greater efficiency in utility distribution.
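Flagging "abnormal or excessive usage at the smallest level" is, at its simplest, outlier detection against a meter's own history. A minimal sketch, assuming a plain z-score rule (the function name and three-sigma threshold are my own illustrative choices):

```python
# Hypothetical sketch: flagging abnormal consumption on a smart
# utility network. A reading is "abnormal" if it lies more than
# z_threshold standard deviations from the meter's historical mean.
from statistics import mean, stdev

def abnormal_readings(history, new_readings, z_threshold=3.0):
    """Return the subset of new_readings that deviate from the
    historical mean by more than z_threshold standard deviations."""
    mu = mean(history)
    sigma = stdev(history)
    if sigma == 0:
        # Perfectly flat history: anything different is abnormal.
        return [r for r in new_readings if r != mu]
    return [r for r in new_readings if abs(r - mu) / sigma > z_threshold]
```

Run per meter, per billing cycle, a rule like this catches leaks and theft that aggregate city-level totals would hide.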
The burden thus lifted from the budgets of governments, which control utility production and distribution in most countries, can - or rather should - translate into either better services or lower tax burdens. This is happening not at some point in the future, but today; a case in point is DEWA (the Dubai Electricity and Water Authority) in the United Arab Emirates. Its colourful and user-friendly presentation of consumption data has meant that something as dull and mundane, but important, as my electricity and water bill is now reviewed with an interest hitherto unknown!
Medicine and Surgery
There could scarcely be another field where the risk of a negative outcome is as undesired as in medicine and healthcare management. The deployment of artificial intelligence technologies, such as miniature data-collection devices and the ability to store and rapidly process their output in real time, has enabled medicine to significantly lower the risk of undesired outcomes at both the decision-making and action stages. Large-scale processing and analysis of anonymized patient information, and the use of pattern-recognition techniques on a scale never before considered possible - enabled by an exponential decline in the cost of computers and sensor devices - have vastly improved the early detection of several diseases, enabling early intervention and, in an increasing number of cases, setting the stage for full recovery. Data-collection sensors have shrunk to the chromosomal scale, and the potential deployment of tiny robots in pill form inside the human body, carrying detailed instructions to perform specific, pre-programmed tasks - such as scrubbing out a blockage in an arterial tract - is no longer a pipe dream (no pun intended). Since arterial blockage is one of the predominant causes of early death today, the effects of wider deployment and lower costs here could be paradigm-changing.
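To make "pattern recognition over anonymized patient records" concrete, here is the simplest possible version of the idea: classify a new case by its most similar historical case. The features, labels, and function name below are invented for illustration; real diagnostic models are trained on vastly richer data:

```python
# Hypothetical sketch of pattern recognition over anonymized patient
# records: a 1-nearest-neighbour classifier labels a new case with
# the label of the most similar historical case.
import math

def nearest_label(records, new_case):
    """records: list of (feature_vector, label) pairs.
    Returns the label of the record closest to new_case
    by Euclidean distance over the feature vectors."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return min(records, key=lambda r: dist(r[0], new_case))[1]
```

The value of scale is visible even here: the more (anonymized) historical cases in `records`, the more likely the nearest match reflects a genuinely similar patient.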
Doctors and patients, as well as interested third parties such as insurers and state medical-care providers, have benefited from the deluge of data and analysis made possible by artificial intelligence techniques, which allows resources to be devoted to medical treatment far more efficiently and effectively - saving resources across the spectrum and dramatically lowering medical costs for all.
It has also enabled people grappling with various medical conditions to engage more proactively with their own recovery and condition management. It could be something as simple as a reminder device prompting people to take their medications on time (something a friend of mine and I ideated long ago, to no consequence!), paired with a software program or app that records the results and presents them to the consumer or to interested medical parties.
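The record-and-report half of that idea is simple enough to sketch. A minimal, hypothetical version (class and method names are my own; a real app would add notifications, persistence, and privacy controls):

```python
# Hypothetical sketch of the reminder-and-record idea: log each dose
# as taken, then report adherence over a span of days for the patient
# or an interested medical party.
from datetime import date

class MedicationLog:
    def __init__(self, doses_per_day):
        self.doses_per_day = doses_per_day
        self.taken = {}  # maps date -> number of doses recorded

    def record_dose(self, day):
        """Record one dose taken on the given day."""
        self.taken[day] = self.taken.get(day, 0) + 1

    def adherence(self, days):
        """Fraction of scheduled doses actually recorded over `days`.
        Extra doses beyond the daily schedule are not counted."""
        scheduled = self.doses_per_day * len(days)
        actual = sum(min(self.taken.get(d, 0), self.doses_per_day)
                     for d in days)
        return actual / scheduled if scheduled else 0.0
```

The adherence figure is exactly the kind of small, continuous feedback signal the thesis is about: cheap to collect, and directly actionable by doctor and patient alike.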
In combination with ground-breaking advances such as genome sequencing, the pace of medical discovery has also accelerated in ways never before conceived. The impact of these advances is visible even in the ongoing response to the coronavirus pandemic: the formulation of vaccines, which in the absence of artificial intelligence technologies would have taken years if not decades, has been compressed into mere months.
That is what stands to be changed.
With the deployment of artificial intelligence techniques, the stages in which humans have hitherto operated - "learning" and "decision-making" - now face the prospect of mechanization. The effect of this, as with the mechanization of agriculture and industry in previous generations, is to weed out inefficiencies and render the process leaner and more productive, with the ultimate benefit arriving as desired outcomes: greater-quality output, in abundance, at a lower cost.
This renders the bulk of the advanced economies, where the workforce accounting for most value addition is engaged predominantly in the learning and decision-making stages, potentially subject to a crushing decline in utility. The need for people in these sectors to re-skill and re-tool for the changed economic circumstances is made only more urgent by the rapidity with which these changes are taking place. The first industrial revolution took a hundred and fifty years to fully pan out; the second took fifty; the third, twenty (?). The world will look very different in 2030.