The Future of AI
To say that Artificial Intelligence is taking the world by storm is an understatement. It is projected to be the all-conquering technological advancement of the coming century. Depending on the field you work in, this triggers a wide range of emotions, from gut-wrenching fear to euphoric excitement.
I am more of an observer at this point, interested enough to read more or less anything related to AI that crosses my feed. I satisfy myself with knowing that transformers have nothing to do with Autobots and Decepticons, and that stochastic gradient descent is not a mountaineering technique.
In the last week I read two articles that gave me a great perspective on AI as a whole. The first was an article by Rob Toews on Forbes.com about the work of the DeepMind team on AlphaFold, which predicts with considerable accuracy how proteins fold into a 3D architecture. The complexity of proteins is covered in the article, but let's just say that the combinations are humongous, and the combination determines whether you are a mouse or a human being. Normally, arriving at the structure of a protein takes years of experimentation and specialised equipment. Yet as of July 2021, a library of 350,000 protein structures predicted by AlphaFold has been published.
To me the article was fascinating because it showed how AI is helping us gain knowledge and insight into previously insurmountable problems. Of course, that added insight also means we gain more understanding of what we actually do not know. From a "glass half empty" perspective, it also showed how fast DeepMind was gaining in intelligence that could one day threaten my precarious existence as a knowledge professional, or, even more depressingly, take away my ability to commandeer the helm of an automobile through all that self-driving blah blah, a skill I consider existential.
In the midst of this came an article from IEEE Spectrum by Neil C. Thompson which sheds light on the computational limitations faced by deep learning. Apparently, deep learning models are fast approaching the limits of computational capability and, more importantly, incurring huge costs in training to achieve the necessary accuracy. This cost is a critical factor in deciding the return on investment of deep learning projects and, correspondingly, their viability. The article touches upon the various workarounds and techniques used by leading AI research teams to mitigate the cost of training and the associated CO2 emissions.
This places the value of human capital in context and illustrates how much ground technology has to cover to match a human brain. Again, as mentioned above, it also shows the frontiers of knowledge that remain to be explored and conquered. All the talk about machines replacing humans is a bit far-fetched. Yes, just as the tractor replaced the oxen, certain skills will become obsolete. As much as I love driving, I do see it becoming obsolete as a skill in the next 50 years, if not earlier. However, my skill as a software architect will still be relevant, assuming I use my natural deep learning mechanism (my brain) to discover the great unknown.
Yuval Noah Harari, in his book "Sapiens", says that great progress started in societies that realised the knowledge they possessed was but a small part of the vast mysteries the world had to offer. This realisation, and the subsequent drive to gain more knowledge and consequently riches, hugely spurred human development over the next 750 years to where we are today. In the same way, both articles expose the exciting frontiers that remain to be conquered, in which AI is a welcome aid and not necessarily something to be feared.
P.S.: The initial outline of this frontier seems to be the Metaverse, but more on that later.