The Evolution of Computer Science: from the Static to the Dynamic Paradigm

Written by luc.claustres | Published 2018/01/25
Tech Story Tags: computer-science | static-to-the-dynamic | static-paradigm | dynamic-paradigm | evolution-of-cs



You can read a lot about the evolution of computer science, even though it is a relatively young discipline. The most controversial topics tend to be about general paradigms of thinking: object-oriented vs. functional programming, declarative vs. imperative programming, RISC vs. CISC, SQL vs. NoSQL, etc. Most of these classic debates are now settled or irrelevant. In this brief article, however, I would like to show that the main trend in computer science is a transition from static to dynamic things.

WTF are you talking about?

In general, dynamic means capable of change, while static means stationary or fixed. Here are a few examples from computer science showing that many things were far less dynamic in the past than they are today.

A punched card, i.e. a “static” program

On the ENIAC, one of the first large-scale electronic computers, operations were performed by reading encoded instructions from a pattern of holes punched into a paper card. The technique was inspired by the punched cards used to automate weaving patterns on looms. You could not update your program on the fly if you found a bug or wanted to optimize it.

Static Web 1.0 vs. Dynamic Web 2.0 © https://www.pinterest.fr/pin/437834394993131730/

According to Tim Berners-Lee himself, Web 1.0 could be considered the "read-only web". In other words, the early web only allowed users to search for information and read it. This lack of active interaction led to the birth of Web 2.0, the "read-write" web. Now even a non-technical user can contribute content and interact with other web users.
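To make the contrast concrete, here is a minimal sketch of the two styles, assuming the Flask micro-framework purely for illustration: a Web 1.0 page only serves fixed content, while a Web 2.0 endpoint also accepts contributions from its users.

```python
from flask import Flask, request, jsonify

app = Flask(__name__)
comments = []  # user-contributed content, empty in a "read-only" world

# Web 1.0 style: the server only hands out fixed content
@app.route("/article")
def read_article():
    return "<h1>A static article</h1><p>Same for every visitor.</p>"

# Web 2.0 style: users can write as well as read
@app.route("/comments", methods=["GET", "POST"])
def read_write_comments():
    if request.method == "POST":
        comments.append(request.get_json()["text"])
    return jsonify(comments)

if __name__ == "__main__":
    app.run()
```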

© https://www.talend.com/blog/2017/06/26/what-everyone-should-know-about-machine-learning/

Ordinary algorithms take input and produce output based on hard-coded rules and parameters. Machine Learning (ML) algorithms instead use data to dynamically generate rules and adjust their parameters. While ML (and more specifically Deep Learning) produces black boxes that are virtually impossible for humans to interpret, a rules-based system is easier to understand and works correctly as long as you know in advance all the situations in which decisions must be made, but its scope of application is far less general.
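As a toy illustration (a sketch under simplified assumptions, with a tiny made-up dataset), compare a hard-coded rule with a model that learns its own decision boundary from data, here using scikit-learn:

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

# Static approach: a hard-coded rule, written by a human expert
def is_spam_rule(message):
    return "free money" in message.lower()

# Dynamic approach: the "rules" (model parameters) are learned from data
train_messages = ["free money now", "win free cash", "meeting at noon", "lunch tomorrow?"]
train_labels = [1, 1, 0, 0]  # 1 = spam, 0 = not spam

vectorizer = CountVectorizer()
model = MultinomialNB()
model.fit(vectorizer.fit_transform(train_messages), train_labels)

def is_spam_learned(message):
    return bool(model.predict(vectorizer.transform([message]))[0])

print(is_spam_rule("Win FREE cash today"))     # False: the rigid rule misses this variant
print(is_spam_learned("Win FREE cash today"))  # True: generalized from similar examples
```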

For a long time, applications were tied to a target machine. Plugin-based architectures are now common and allow features to be added on demand at run-time. Cloud computing and SaaS give users the ability to dynamically load, use, and synchronize their applications across different devices. In addition, high availability brought dynamic load balancing, data replication, and resource scaling.
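Here is a minimal sketch of the plugin idea using Python's standard importlib; the my_plugin module name and its register() entry point are hypothetical conventions for this example, not a real API:

```python
import importlib

def load_plugin(name):
    """Load a feature at run-time instead of linking it at build time."""
    module = importlib.import_module(name)  # e.g. a file my_plugin.py on the path
    return module.register()                # assumed plugin entry point

# The set of features is data, not code baked into the application
for plugin_name in ["my_plugin"]:  # hypothetical; read from configuration in practice
    try:
        feature = load_plugin(plugin_name)
        print(f"Loaded feature: {feature}")
    except ModuleNotFoundError:
        print(f"Plugin {plugin_name} not installed, skipping")
```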

These are just a few examples, but if you think about it you will easily find more things that have moved from static to dynamic behavior.

Why it’s a trend that’s here to stay

Given the dynamic world we live in, especially where human behavior is involved, you might expect computer science products to embrace this paradigm more often than not. In a nutshell, the world around us is dynamic, so computer science is too, unless forced otherwise by external constraints such as processing power. But there are other reasons why the dynamic trend is here to stay.

On the one hand, static things often require expertise to be changed because their implementation details are not accessible to end-users. Their behavior is intrinsically built in, rather than exposed as external data that can be manipulated in a friendly way, much like reflexes versus mental models. Dynamic configuration gives end-users the autonomy to manually accommodate changing conditions in their environment.
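For instance, moving a value out of the code and into a file that is re-read at run-time hands that decision to the end-user. A minimal sketch, where settings.json is a hypothetical user-editable file:

```python
import json
import pathlib

# Static: changing this value requires editing and redeploying the code
TIMEOUT_SECONDS = 30

def get_timeout():
    """Dynamic: the value lives in external data the user can edit at any time."""
    config_file = pathlib.Path("settings.json")  # hypothetical user-editable file
    if config_file.exists():
        return json.loads(config_file.read_text()).get("timeout_seconds", 30)
    return 30  # sensible default when no configuration is present
```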

On the other hand, dynamic models are designed to morph, automatically accommodating changing conditions in the environment. Indeed, patterns usually change over time, and ideally models should self-adjust periodically. For instance, a weakness of most machine learning models today is their inability to adapt to change: when the target properties the model is trying to predict change over time in unforeseen ways, predictions become less accurate as time passes. As a consequence, major users of machine learning models like Google are trying to set up pipelines that reliably ingest training datasets and continuously generate models as output.
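A common building block for such pipelines is incremental learning: instead of training once, the model is updated on each fresh batch of data. A sketch using scikit-learn's partial_fit on a simulated, slowly drifting stream:

```python
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(42)
model = SGDClassifier()
classes = np.array([0, 1])

# Simulated stream: each day brings a fresh batch whose distribution drifts
for day in range(7):
    X = rng.normal(loc=day * 0.1, size=(100, 3))   # inputs slowly drift over time
    y = (X.sum(axis=1) > day * 0.3).astype(int)    # target relation drifts too
    model.partial_fit(X, y, classes=classes)       # adjust instead of retraining from scratch
    print(f"day {day}: accuracy on today's batch = {model.score(X, y):.2f}")
```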

Last but not least, Moore's law describes the exponential growth of processing power, meaning you can do more at the same cost as time goes by. The static paradigm led the race for decades because computing resources were scarce and tooling for software development was therefore limited, but that is no longer true. Meanwhile, software is eating the world; even hardware has become programmable. More and more major businesses and industries are being run on software and delivered as online services, from movies to agriculture to national defense.

This general static vs. dynamic trend also puts more specific trends in perspective. The web is eating everything because web sites have become dynamic in so many aspects. JavaScript is eating web development because it allows all programming styles to be used in a dynamic fashion. AI is eating automation because processing pipelines are now dynamically adjusted. And serverless will eat infrastructure because processing resources will be dynamically provisioned.

If you liked this article, hit the applause button below, share it with your audience, follow me on Medium, or read more for insights about the rise and next step of Artificial Intelligence, the goodness of unlearning, and why software development should help you in life.


Written by luc.claustres | Digital Craftsman, Co-Founder of Kalisio, Ph.D. in Computer Science
Published by HackerNoon on 2018/01/25